Daily Tech Digest - June 10, 2023

Vetting an Open Source Database? 5 Green Flags to Look for

There’s an important difference between offerings that are legitimate open source versus open source-compatible. “Captive” open source solutions pose as the original open source solution from which they originated, but in reality, they are merely branches of the original code. This can result in compromised functionality or the inability to access features introduced in newer versions of the true open source solution, as the branching occurred prior to the introduction of those features. “Fake” open source can feature restrictive licensing, a lack of source code availability and a non-transparent development process. Despite this, these solutions are sometimes still marketed as open source because, technically, the code is open to inspection and contributions are possible. But when it comes down to it, the license is held by a single company, so the degree of freedom is minute compared to that of actual open source. The key is to minimize the gap between the core database and its open source origins.


Zero trust and cloud capabilities essential for data management in enterprises

The challenge, however, lies in implementing a complete solution guided by the seven pillars of Zero Trust. No company can do this alone. To help private and public sector organizations simplify adoption, Dell is building a Zero Trust ecosystem. It brings together more than thirty leading technology and security companies to create a unified solution across infrastructure platforms, applications, clouds, and services. PowerStore has always had a strong “security DNA,” safeguarding data with advanced capabilities like hardware root of trust, data-at-rest encryption and AIOps security analytics. As with everything about the platform, the focus is simplicity and automation – delivering “always on” protection without increasing management complexity or relying on human vigilance to be effective. In 2023, the newest PowerStoreOS release adds even more cybersecurity features to meet stringent requirements, while also enabling an authentic Zero Trust experience for business solutions.


Expecting Too Much From CISOs Can Drive Them Out The Door

“The CISO is there to raise the risk, to shine light on it, to offer solutions, to differentiate and prioritize what needs to be fixed,” he explained. “You can’t ask the CISO to do anything and everything; you need to give them the support — and give them a team that can really make sure the cybersecurity and risk management program is well-functioning.” Expecting too much from CISOs — as so many company boards still do — continues to drive attrition from the security function at a brisk pace, with burnout and the desire for greener pastures pushing 24 percent of Fortune 500 CISOs to switch roles within a year of starting. ... The increasing complexity of the modern cybersecurity defense has dovetailed with the rapid expansion of managed service providers like eSentire, whose ability to offer the full breadth of security capabilities — and to do so confidently enough to offer guarantees like four-hour response times for remote threat suppression — puts them well ahead of anything the average corporate information security department can provide.


SRE Brings Modern Enterprise Architectures into Focus

If the business commitment is that users will reliably have enough light to see what they are doing (service level), an SLO could be that one brightly lit lamp (availability) is maintained for every 10 square feet of space. ... In application delivery systems these could look like CPU utilization, API call and database query times, etc. It’s up to the site reliability engineers to define the SLI measures that impact the business SLOs and what responses will be taken when they fall below specific thresholds by adjusting operating policies and configuration. ... The measures, thresholds, and responses are the intersection of SRE with the other domains of a modern enterprise architecture designed for the application delivery of a digital business. Operational data—telemetry—feeds the observability of the defined measures and thresholds set by SRE. Automation is the combined application of tools, technologies, and practices to enable site reliability engineers to scale defined responses with less toil, thus enabling the efficient satisfaction of the SLOs of a digital service.
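The SLI-to-response loop described above can be sketched in a few lines. This is a minimal illustration, not a real SRE tool; the metric names, thresholds, and breach directions are hypothetical examples:

```python
# Minimal sketch: evaluating SLI measurements against SLO-derived thresholds.
# Metric names and threshold values below are hypothetical examples.

SLIS = {
    "availability_pct": {"threshold": 99.9, "breach_if": "below"},
    "db_query_ms_p95": {"threshold": 250.0, "breach_if": "above"},
}

def evaluate(telemetry: dict) -> list[str]:
    """Return the names of SLIs whose measured values breach their thresholds."""
    breaches = []
    for name, spec in SLIS.items():
        value = telemetry.get(name)
        if value is None:
            continue  # no telemetry for this SLI in this window
        if spec["breach_if"] == "below" and value < spec["threshold"]:
            breaches.append(name)
        elif spec["breach_if"] == "above" and value > spec["threshold"]:
            breaches.append(name)
    return breaches

print(evaluate({"availability_pct": 99.95, "db_query_ms_p95": 310.0}))
# → ['db_query_ms_p95']
```

In practice each breached SLI would map to a defined response — scaling out, adjusting a policy, or paging on-call — which is the "less toil" automation the excerpt refers to.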


What LOB leaders really think about IT: IDC study

For many IT leaders, turning that tide may require a new approach. CIOs can demonstrate their value to the business and earn that seat at the table by tying what they do to business goals, Thomson suggested. “One of the biggest challenges that IT people have is being able to communicate their business value in a language that the business understands,” she said. “Talking in business outcomes is the currency that enables IT to gain trust and show the value that they’re delivering.” In addition to mastering business concepts and taking steps to prove the value of IT, CIOs who are succeeding at this are putting in place seamless teams where there’s no wall between IT and the business, she said. “It’s just seen as one cross-functional team where everybody understands the common goal that is driving all the business decisions.” Such strategic maneuvers are essential to becoming a digital business, one where value creation is based on and dependent on the use of digital technologies, from how processes are run to the products, services, and experiences it provides, Thomson said.


Microsoft commits to supporting customers on their responsible AI journeys

The commitments include sharing Microsoft's expertise while teaching others to develop AI safely, establishing a program to ensure AI applications are created to follow legal regulations, and pledging to support the company's customers in implementing Microsoft's AI systems responsibly within its partner ecosystem. "Ultimately, we know that these commitments are only the start, and we will have to build on them as both the technology and regulatory conditions evolve," Cook wrote in the statement shared by Microsoft. Though the company only recently developed its Bing Chat generative AI tool, Microsoft will start by sharing key documents and methods that detail the company's expertise and knowledge gained since beginning its journey into AI years ago. The company will also share training curriculums and invest in resources to teach others how to create a culture of responsible AI use within organizations working with the technology. Microsoft will establish an "AI Assurance Program" to leverage its own experiences and apply the financial services concept called "Know your customer" to AI development.


Data Privacy Standard Contractual Clauses Called Into Question After Meta Ireland Fine

Although this decision deals a particularly large blow to Meta, all entities relying upon SCCs to complete data transfers from the EU to the U.S. are now affected. Due to the continued and wide-reaching effects of the U.S.’s strategy on surveillance, we’ve now entered yet another period of uncertainty, and the ability to lawfully transfer personal data into the U.S. from the EU and United Kingdom is again in question. ... As a remedy, the DPC has given Meta five months to suspend all transfers of personal data to the U.S., bring its processing activities into compliance with EU law, and delete any EU personal data that has been transferred unlawfully under this decision. The EU has long struggled with how to regulate EU personal data transfers to the U.S. After the invalidation of the U.S.-EU Safe Harbor Agreement and the U.S.-EU Privacy Shield in the Schrems I & Schrems II decisions, entities including Meta have mostly relied on SCCs to lawfully transfer EU personal data into the U.S., where U.S. laws are considered to provide substantially less protection.


5 Critical Data Governance Truths Every Data Leader Should Be Aware Of

Implementing a comprehensive data governance program comes with a significant price tag; firms can easily spend over US$1 million annually just on resources to maintain data integrity. The risks associated with poor data governance, however, are many: reputational damage, lost revenue, and more. Making decisions based on inaccurate data is costly and leads to poor business outcomes. ... Data governance is often misunderstood to be solely about data. However, it's vital to understand that data governance comprises many components, each playing a crucial role in ensuring data is managed effectively and efficiently. ... A good data governance program is one with KPIs. The KPIs should be specific, measurable, and understandable by everyone in the organization. By measuring these KPIs regularly and providing timely feedback, managers can determine whether their efforts are paying off. They can also communicate value metrics to key executives.
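One concrete, measurable governance KPI is field completeness. The sketch below is a hypothetical illustration (the records, field name, and 98% target are invented for the example), showing the kind of specific, regularly measured metric the excerpt calls for:

```python
# Minimal sketch of a data-governance KPI: field completeness.
# Records, field name, and the 98% target are hypothetical examples.

def completeness_pct(records: list[dict], field: str) -> float:
    """Percentage of records where `field` is present and non-empty."""
    if not records:
        return 0.0
    filled = sum(1 for r in records if r.get(field) not in (None, ""))
    return 100.0 * filled / len(records)

customers = [
    {"id": 1, "email": "a@example.com"},
    {"id": 2, "email": ""},               # missing value drags the KPI down
    {"id": 3, "email": "c@example.com"},
]

kpi = completeness_pct(customers, "email")      # ≈ 66.7
meets_target = kpi >= 98.0                      # the target managers report against
```

Tracking a handful of such numbers over time is what lets managers see whether governance efforts are paying off and communicate that value to executives.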


CDEI publishes portfolio of AI assurance techniques

The "portfolio of AI assurance techniques" was created to help anyone involved in designing, developing, deploying or otherwise procuring AI systems do so in a trustworthy way, by giving examples of real-world auditing and assurance techniques. “AI assurance is about building confidence in AI systems by measuring, evaluating and communicating whether an AI system meets relevant criteria,” said the CDEI, adding these criteria could include regulations, industry standards or ethical guidelines. “Assurance can also play an important role in identifying and managing the potential risks associated with AI. To assure AI systems effectively we need a range of assurance techniques for assessing different types of AI systems, across a wide variety of contexts, against a range of relevant criteria.” The portfolio specifically contains case studies from multiple sectors and a range of technical, procedural and educational approaches, to show how different techniques can combine to promote responsible AI.


Consolidating your cyber security strategy

From a security perspective, consolidating threat defence into one system means that all devices and endpoints can be set to one standard, minimising the opportunity for weak spots and gaps to appear. In the event of a breach, such as a member of staff clicking a malicious link, an XDR system can isolate the threat to stop it spreading and roll back the endpoint to a safe state. Although changing cyber security tactics should not be viewed as a cost-cutting solution, vendor consolidation can certainly save money. By replacing multiple products that may overlap, reducing the man-hours spent monitoring different systems and avoiding the consequences of a successful breach, businesses can get a better return on their investment. Not all XDR systems are the same, and it is important to choose one that best suits the needs of a business. XDR has traditionally only been available for large enterprises. However, finding the right partnership can allow small and medium sized companies to customise the solution to fit their requirements without unnecessary extras.



Quote for the day:

"Leadership does not always wear the harness of compromise." -- Woodrow Wilson

Daily Tech Digest - June 09, 2023

Why Protecting Data Centers Requires A Personalized Security Approach

Since each industry has its own unique security and privacy needs, businesses should work with security providers to vet their services and ensure they’re a vertical fit. Beyond HIPAA and PCI, these can also include standards in government like FISMA and FEDRAMP as well as FERPA in education. For businesses in these industries, partnering with a security provider with a background in their respective vertical is a must. Security needs vary from data center to data center, so security providers must do a thorough analysis of all potential risks and threats. These solutions providers should ask hard questions of their customers to truly understand the security level needed. Businesses need to be prepared for a worst-case scenario and determine how they can secure customer data in the event of a disruption. If there’s a power outage, how long can they be down for? If they’re a retail business, what’s the impact on the bottom line if an outage happens on Black Friday? How much damage to a business’ reputation will happen if customer information is leaked in a breach?


ChatGPT’s ‘Perfect Storm’: Managing Risk and Eyeing Transformational Change

At the eye of this storm lies the rapid evolution of ChatGPT’s capabilities, marking the advent of what we refer to as the “Age of AI” or the “Fourth Industrial Revolution.” I shed light on ChatGPT’s transformational capabilities, especially its potential to reshape business operations. In my personal experience, ChatGPT has proven itself valuable in tasks such as drafting initial document versions and creating LinkedIn posts, even suggesting suitable emojis! However, accompanying this storm is a limited understanding of associated risks, further compounded by the absence of a regulatory framework tailored to such advanced AI models and varying levels of organizational preparedness for an AI-driven future. ... It calls for interdisciplinary collaboration involving technological expertise, regulatory compliance, risk management and operational understanding. By ensuring this balanced and holistic approach, organizations can fully exploit the advantages of AI technologies like ChatGPT while mitigating potential risks and pitfalls.


The Six Disruptive Forces That Will Shape Your Business’s Future

Technological advances and digital innovations – the primary driver of growth in the US economy during the past 25 years – will continue to drive new business models and ecosystem relationships. The past few decades witnessed a massive explosion of computing and communications capability, along with the scaling of new business models, and the ability to connect every person on the planet through the internet. The next decade promises even more of this, perhaps exponentially so, driven by technologies such as artificial intelligence, blockchain, 5G networks, and edge computing. ... A proliferation of new communication technologies enabled the widespread adoption of hybrid work models in the wake of the pandemic. Some see this shift in working patterns as an evolutionary step in how work occurs – an incremental change. We see it differently: in our view, remote work represents a step change in how labor markets are organized, raises big questions about productivity, and creates important collateral effects in other areas of the economy.


How to use the new AI writing tool in Google Docs and Gmail

The AI tools in Slides and Sheets are not yet available, but Help Me Write is in limited preview; you can try it out in Google Docs or Gmail on the web by signing up for access to Workspace Labs with your Google account. (You’ll be put on a waitlist before being granted access.) Like the well-known ChatGPT, Help Me Write is a chatbot tool that generates written text based on prompts (instructions) that you give it. Whether you’re a professional writer or someone who dreads having to write for your job, the potential of AI assistance for your writing tasks is appealing. Help Me Write can indeed write long passages of text that are reasonably readable. But its results come with caveats including factual errors, redundancy, and too-generic prose. This guide covers how to use Help Me Write in both Google Docs and Gmail to generate and rewrite text, and how to overcome some of the tool’s shortcomings. Because it’s in preview status, keep in mind that there may be changes to its features, and the results it generates, when it’s finally rolled out to the public.


Winning the Mind Game: The Role of the Ransomware Negotiator

Professional negotiation is the act of taking advantage of the professional communication with the hacker in various extortion situations. The role comprises four key elements:

1. Identifying the scope of the event - Takes place within the first 24-48 hours. Includes understanding what was compromised, how deep the attackers are in the system, whether the act is a single, double or triple ransomware, if the attack was financially motivated or if it was a political or personal attack, etc. In 90% of cases, the attack is financially motivated. If it is politically motivated, the information may not be recovered, even after paying the ransom.

2. Profiling the threat actor - Includes understanding whether the group is known or unknown, their behavioral patterns and their organizational structure. Understanding who the attacker is influences communication. ... This can be used for improving negotiation terms, like leveraging public holidays to ask for a discount.

3. Assessing the "cost-of-no-deal" - Reflecting to the decision makers and the crisis managers what will happen if they don't pay the ransom.


RFI vs. RFP vs. RFQ: What are the differences?

Each document -- a request for information (RFI), a request for proposal (RFP) and a request for quote (RFQ) -- has a distinct purpose when undertaking a significant project, even if some overlap exists. While it's possible to issue all three types of requests for a single project, buying teams will typically only issue one or two of them, given the overlap. ... Software buying teams use a request for information when they want additional information from vendors before finalizing the RFP or RFQ. The buying team may lack clarity on requirements, want more information on available options in the market or need details validated, which the vendors' industry experts can do. ... The RFP will list the requirements in detail, provide a recommended timeline and request pricing from the vendors. The buying team might ask specific questions about the vendor, such as the length of time they've been in business, completion proportion of similar projects, annual sales and number of staff. The RFP response may have mandatory terms to follow, such as a submission due date and other critical information.


Contextual Computing and the Internet of Things: A Perfect Match

The convergence of contextual computing and the IoT is a natural progression, as both technologies rely on data to function effectively. By combining the contextual awareness of AI-powered systems with the vast amounts of data generated by IoT devices, we can create intelligent systems that are capable of making real-time decisions and providing personalized experiences. One of the most significant benefits of this convergence is the ability to create more efficient and sustainable systems. For example, in the realm of energy management, IoT devices can collect data on energy consumption patterns, while contextual computing can analyze this data to identify inefficiencies and suggest improvements. This could lead to the development of smart grids that optimize energy distribution and reduce waste, ultimately contributing to a more sustainable future. Another area where the combination of contextual computing and the IoT can have a significant impact is in healthcare. 
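The energy-management example above can be made concrete with a small sketch. This is a hypothetical illustration (the readings, business-hours window, and idle threshold are invented), showing how context — here, whether the site is open — turns raw IoT meter data into an inefficiency flag:

```python
# Minimal sketch: flagging energy inefficiencies from hourly IoT meter readings.
# The readings, business-hours window, and idle limit are hypothetical examples.

def flag_off_hours_waste(hourly_kwh: dict[int, float],
                         open_hour: int = 8, close_hour: int = 18,
                         idle_limit_kwh: float = 2.0) -> list[int]:
    """Return hours outside business hours whose usage exceeds the idle limit."""
    return [h for h, kwh in sorted(hourly_kwh.items())
            if not (open_hour <= h < close_hour) and kwh > idle_limit_kwh]

readings = {7: 1.5, 9: 6.2, 13: 5.8, 20: 4.9, 23: 0.8}
print(flag_off_hours_waste(readings))
# → [20]  (4.9 kWh drawn at 8 p.m., while the site is closed)
```

A contextual system would go further — correlating such flags with occupancy sensors or weather data before suggesting an improvement — but the pattern is the same: context decides whether a reading counts as waste.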


Beyond Requirements: Tapping the Business Potential of Data Governance and Security

The teams responsible for data protection and security have often been pitted against the teams that want to leverage data for business insight. This conflict is unsustainable when the business needs maximum agility to respond to volatile market conditions and unexpected competitive pressures. In fact, the alignment of internal objectives and incentives is an opportunity to accelerate outcomes for the business. ... Functions of data governance, data security and data privacy are becoming increasingly interdependent within the enterprise. Stakeholder communication and collaboration are critical. But in many cases, there is a counterproductive feedback loop inhibiting this critical cultural alignment. Siloed technology often obstructs meaningful interdisciplinary collaboration, which prevents the adoption of more unified supporting technologies. In this sense, both automation and integration should be key areas of technological focus for today’s businesses. Now is the time for change, as many organizations risk falling behind in their data governance and security efforts.


Cybersecurity Pioneer Calls for Regulations to Restrain AI

“We know that you can use deep fakes to do scams or business email compromise attacks or what have you.” Current tools gave criminals and other bad actors the ability to generate unlimited personas, which could be used for multiple types of scams. More broadly, the march of AI also means that whatever can be done purely online can be done through automation and large-scale language models like ChatGPT, he said, which has obvious implications for developers. However, he said, humans are harder to replace where there’s an interface between the real world and online technology. Rather than studying to build software frameworks for the cloud, he said, “You should be studying to build software frameworks for, let’s say, medical interfaces for human health because we still need the physical world. For humans to work with humans to fix their diseases.” Looking slightly further ahead, he said that people who worried about the likes of ChatGPT becoming too good, or achieving AGI, “haven’t paid attention”, as that is precisely what the declared goal of OpenAI is.


The steep cost of a poor data management strategy

For many organizations, the real challenge is quantifying the ROI benefits of data management in terms of dollars and cents. Unlike other business investments, the returns may not be immediately apparent because the benefits accrue over time. This places a major focus on the initial investment instead of the potential outcomes and ROI, often disguising data management’s incredible value. Let’s look at how we can resolve this—while there is still time to do so. Regardless of your industry, data is central to almost every business today. Leveraging that data, in AI models, for example, depends entirely on the accessibility, quality, granularity, and latency of your organization’s data. Without it, organizations incur a significant opportunity cost. A few years ago, Gartner found that “organizations estimate the average cost of poor data quality at $12.8 million per year.” Beyond lost revenue, data quality issues can also result in wasted resources and a damaged reputation.



Quote for the day:

"Even the demons are encouraged when their chief is 'not lost in loss itself.'" -- John Milton

Daily Tech Digest - June 08, 2023

5 Reasons Why IT Security Tools Don't Work For OT

While IT and OT both seek to ensure confidentiality (the protection of sensitive data and assets), integrity (the fidelity of data over its lifecycle), and availability (the accessibility and responsiveness of resources and infrastructure), they prioritize different pieces of this CIA triad. IT's highest priority is confidentiality. IT deals in data, and the stakeholders of IT concern themselves with protecting that data — from trade secrets to the personal information of users and customers. OT's highest priority is availability. OT processes operate heavy-duty equipment in the physical realm, and for them, availability means safety. Downtime is simply untenable when it means shutting off a blast furnace or industrial boiler tank. For the sake of availability and responsiveness, most OT components weren't built to accommodate security implementations at all. ... Almost all IT-based tools require downtime for installation, updates, and patching. These activities are generally a non-starter for industrial environments, no matter how significant a vulnerability may be. Again, downtime for OT systems means putting safety at risk.


Oshkosh CIO Anu Khare on IT’s pursuit of value

VSP stands for value, strategic fit, and passionate sponsor. The framework ties to my fundamental philosophy of letting cost, value, and the customer decide what is valuable and what is not valuable for our customers. We didn’t start with VSP, but it evolved as a guiding framework, as we looked at our portfolio enablement process and asked ourselves, what’s the simplest way to approach project portfolio management? First, we decided to focus on the value. We started working with the business sponsors to articulate where and what impact the technology will have on the business. We then validate with finance, and if it has a hard savings, it gets No. 1 priority in terms of investment. The relentless focus on value also leads to the second point, which is strategic fit. The project may be valuable, but in any organization, the list of things the organization can do is always bigger than what the organization can afford or should afford. This is a capital allocation discussion, so we focus on the strategic fit.


Cisco spotlights generative AI in security, collaboration

Security and IT administrators will be able to describe granular security policies and the assistant will evaluate how to best implement them across different aspects of their security infrastructure, Patel said. At the Live! event, Cisco demoed how a generative Cisco Policy Assistant can reason with the existing set of firewall policy rules to implement and simplify them within the Cisco Secure Firewall Management Center. Cisco says it is the first of many examples of how generative AI can reimagine policy management across the Cisco Security Cloud. ... In addition, he said the security assistant will let customers describe and contextualize events across email, the web, endpoints, and the network to tell security operations center (SOC) analysts exactly what happened, the impact, and the best next steps to take to remediate problems and set new policies. The SOC Assistant will provide a comprehensive situation analysis for analysts, correlating intel across the Cisco Security Cloud, relaying potential impacts, and providing recommended actions with the goal of reducing the time needed for SOC teams to respond to potential threats, he said.


How WASM (and Rust) Unlocks the Mysteries of Quantum Computing

Rather than picking from fixed specs, quantum programming can require you to define the setup of your quantum hardware, describing the quantum circuit that will be formed by the qubits as well as the algorithm that will run on it — and error-correcting the qubits while the job is running — with a language like OpenQASM; that’s rather like controlling an FPGA with a hardware description language like Verilog. You can’t measure a qubit to check for errors directly while it’s working or you’d end the computation too soon, but you can measure an extra qubit and extrapolate the state of the working qubit from that. What you get is a pattern of measurements called a syndrome. In medicine, a syndrome is a pattern of symptoms used to diagnose a complicated medical condition like fibromyalgia. In quantum computing, you have to “diagnose” or decode qubit errors from the pattern of measurements, using an algorithm that can also decide what needs to be done to reverse the errors and stop the quantum information in the qubits from decohering before the quantum computer finishes running the program.
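Syndrome decoding can be illustrated with the simplest case, the 3-qubit bit-flip repetition code. The sketch below is a purely classical toy (a real decoder works from ancilla-qubit parity measurements, never by reading the data qubits directly), but it shows the core idea: the pattern of parity checks alone "diagnoses" which bit flipped:

```python
# Classical toy sketch of syndrome decoding for a 3-qubit bit-flip
# repetition code. Real codes measure parities via ancilla qubits;
# here we compute them directly for illustration.

def syndrome(bits: tuple[int, int, int]) -> tuple[int, int]:
    """Two parity checks (bit0^bit1, bit1^bit2); a 1 flags a violation."""
    a, b, c = bits
    return (a ^ b, b ^ c)

# The decoder maps each syndrome pattern to the bit it implicates.
DECODE = {(0, 0): None, (1, 0): 0, (1, 1): 1, (0, 1): 2}

def correct(bits: tuple[int, int, int]) -> tuple[int, int, int]:
    """Diagnose a single bit-flip from the syndrome and reverse it."""
    flipped = DECODE[syndrome(bits)]
    if flipped is None:
        return bits                 # no error detected
    fixed = list(bits)
    fixed[flipped] ^= 1             # undo the flip
    return tuple(fixed)

print(correct((0, 1, 0)))
# → (0, 0, 0)  — syndrome (1, 1) implicates the middle bit
```

Production decoders for surface codes face the same problem at vastly larger scale, which is where the article's discussion of fast decoding algorithms (and WASM/Rust) comes in.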


Energy security needs a secure IoT

The IoT has a central role to play as governments and industries work to reduce dependence on fossil fuels, establish new forms of energy generation and implement sufficient means of storing, managing and distributing energy. ... IoT connected devices and systems can contribute carbon tracking and smart-meter energy monitoring; they can enable data exchange for microgrids and support mechanisms for selling energy directly back into the network. These solutions will transmit data so that energy companies can monitor devices and conditions, control devices in remote locations, track performance to predict maintenance cycles and act on alerts. They will be able to monitor energy consumption for smart metering through connected meters and sensors for load balancing on the grid. In this way, connectivity is part of the intelligent, efficient, renewable energy model, however it must be cybersecure. As new and additional devices are deployed, they could present more pathways for potential cyberattacks. That is a significant risk and safeguards are therefore needed to protect against unauthorised access to devices, networks, management platforms and cloud infrastructure. 


How to Get Unstuck From Stress and Find Solutions Inside Yourself

The balance of sympathetic and parasympathetic states is critical both for our well-being and for the cultivation of presence. Neither state is superior to the other. They are opposite and equal in their importance. Both are needed to dynamically maintain the homeostasis of the body. (Remember, a state of polarity is the ability to go from one state to the other in alternation, as needed.) As with any ecosystem, complementary forces are necessary to preserve harmony. The trouble is that our regular thinking and doing in the world of business are sympathetically activating. It is not possible to use only the mind to become relaxed and restore balance to the nervous system. We need to counterbalance our SNS (sympathetic nervous system) activation through feeling and being. This is a whole new mode that many high-powered leaders are less familiar with and may not entirely trust. The good news, however, is that when we are in a relaxed, parasympathetic state, we can access the capabilities of our higher intelligence that we need for presence and collaboration, such as visualization and spontaneous generative creativity.


Daily Standups May Not Improve Your Team’s Agility

To make sure every team member gets the support they need, I highly recommend having at least once per week a longer team meeting, something we call “team time”. This meeting should be 30–45 min long and ensure there is enough time to really get to the bottom of a problem and find a solution. Every team member can propose a topic and the team discusses it together. If there are no challenges to discuss, this is also a great forum for other ways of knowledge share. When you are summing up these costs, you will be in a similar or even more expensive range than daily standups, but those meetings are actually helpful since they allow the team to solve problems and share knowledge and, with that, replace other meetings and make work more efficient. The social aspect is something that is rarely stated as a need for daily standups. But, for me, this is a misconception. A healthy and social team will always be an efficient team. Developing a proper team atmosphere and spirit should be key and in the interest of everyone. 


Everything Is Connected: Five IoT Trends Moving Forward

In what sounds like old news at this point, cybersecurity will continue to be at the forefront of business decision making. What is different this year is the rise of artificial intelligence (AI) and ML. AI and ML are making malicious actors more efficient and potentially more effective when carrying out attacks. Natural Language Models such as ChatGPT have opened new directions of attack as well as lowering the overall threshold for creating effective malicious code. Additionally, the changing legislative landscape around privacy will spur companies to take a hard look at the way that they collect, use, and retain sensitive personal data. This may require a complete redesign of products, procedures, or in fact, entire business models. ... Finally, it is no secret that the tech labor market is in a state of upheaval. Many companies are reducing or restricting their workforces as they seek efficiency or profits. This exodus of talented tech professionals has created severe knowledge gaps that must be addressed.


API Management Is a Commodity: What’s Next?

As API management software unbundles the gateway and adapts to the multi-gateway world, new and emerging software vendors are looking to fill the resulting requirement gaps for API design and development, security, analytics, portals, and marketplaces. Alex Walling, field CTO for Rapid, sees that developers need a layer of abstraction on top of their existing API gateways, such as those from WSO2, Kong, and Apigee, so that they can find APIs easily and check whether someone has already developed an API for what they need. Moreover, Derric Gilling, CEO of Moesif, said he believes that API gateways will become just one of the specialized pieces of the API stack developers and organizations will need to assemble to meet the growing adoption of APIs. He sees business models for APIs evolving beyond simply charging for API invocation counts, and the need for a specialized analytics solution to keep pace. Along with the continued explosion of interest in APIs, especially as organizations use more third-party APIs, the development and testing process becomes more complex and time-consuming.


AI: Interpreting regulation and implementing good practice

Emerging standards, guidance and regulation for AI are being created worldwide, and it will be important to align this and create a common understanding for producers and consumers. Organizations such as ETSI, ENISA, ISO and NIST are creating helpful cross-referenced frameworks for us to follow, and regional regulators, such as the EU, are considering how to penalize bad practices. In addition to being consistent, however, the principles of regulation should be flexible, both to cater for the speed of technological development and to enable businesses to apply appropriate requirements to their capabilities and risk profile. An experimental mindset, as demonstrated by the Singapore Land Transport Authority’s testing of autonomous vehicles, can allow academia, industry and regulators to develop appropriate measures. These fields need to come together now to explore AI systems’ safe use and development. Cooperation, rather than competition, will enable safer use of this technology more quickly.



Quote for the day:

"Men who are in earnest are not afraid of consequences." -- Marcus Garvey

Daily Tech Digest - June 07, 2023

The Design Patterns for Distributed Systems Handbook

Some people mistake distributed systems for microservices. And it's true – microservices are a distributed system. But distributed systems do not always follow the microservice architecture. So with that in mind, let's come up with a proper definition for distributed systems: A distributed system is a computing environment in which various components are spread across multiple computers (or other computing devices) on a network. ... If you decide that you do need a distributed system, then there are some common challenges you will face: Heterogeneity – Distributed systems allow us to use a wide range of different technologies. The problem lies in how we keep consistent communication between all the different services. Thus it is important to have common standards agreed upon and adopted to streamline the process. Scalability – Scaling is no easy task. There are many factors to keep in mind such as size, geography, and administration. There are many edge cases, each with their own pros and cons. Openness – Distributed systems are considered open if they can be extended and redeveloped.
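The heterogeneity point above is easiest to see concretely: services written in different stacks can interoperate as long as they all agree on one wire format. A minimal sketch (the event shape and field names are invented for illustration, not taken from the article):

```python
import json
from dataclasses import dataclass, asdict

# Illustrative shared contract: every service, whatever its language or
# framework, exchanges this one agreed JSON shape instead of ad-hoc payloads.
@dataclass
class OrderEvent:
    order_id: str
    status: str
    version: int = 1  # an explicit version lets services evolve independently

def encode(event: OrderEvent) -> str:
    """Serialize to the agreed wire format (JSON)."""
    return json.dumps(asdict(event))

def decode(raw: str) -> OrderEvent:
    """Any consumer, in any language, can parse the same shape back."""
    return OrderEvent(**json.loads(raw))

wire = encode(OrderEvent(order_id="o-42", status="shipped"))
roundtrip = decode(wire)
```

The standard itself matters less than the agreement: JSON over HTTP, Protocol Buffers over gRPC, or Avro over a message bus would all serve, provided every team adopts the same one.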


Shadow IT is increasing and so are the associated security risks

Gartner found that business technologists, those business unit employees who create and bring in new technologies, are 1.8 times more likely than other employees to behave insecurely across all behaviors. “Cloud has made it very easy for everyone to get the tools they want but the really bad thing is there is no security review, so it’s creating an extraordinary risk to most businesses, and many don’t even know it’s happening,” says Candy Alexander, CISO at NeuEon and president of Information Systems Security Association (ISSA) International. To minimize the risks of shadow IT, CISOs need to first understand the scope of the situation within their enterprise. “You have to be aware of how much it has spread in your company,” says Pierre-Martin Tardif, a cybersecurity professor at Université de Sherbrooke and a member of the Emerging Trends Working Group with the professional IT governance association ISACA. Technologies such as SaaS management tools, data loss prevention solutions, and scanning capabilities all help identify unsanctioned applications and devices within the enterprise.


Worker v bot: Humans are winning for now

Ethical and legislative concerns aside, what the average worker wants to know is if they’ll still have a job in a few years’ time. It’s not a new concern: in fact, jobs are lost to technological advancements all the time. A century ago, most of the world’s population was employed in farming, for example. Professional services company Accenture asserts that 40% of all working hours could be impacted by generative AI tools — primarily because language tasks already account for just under two thirds of the total time employees work. In The World Economic Forum’s (WEF) Future of Jobs Report 2023, jobs such as clerical or secretarial roles, including bank tellers and data entry clerks, are reported as likely to decline. Some legal roles, like paralegals and legal assistants, may also be affected, according to a recent Goldman Sachs report. ... Customer service roles are also increasingly being replaced by chatbots. While chatbots can be helpful in automating customer service scenarios, not everyone is convinced. Sales-as-a-Service company Feel offers, among other services, actual live sales reps to chat with online shoppers.


The Future of Continuous Testing in CI/CD

Continuous testing is rapidly evolving to meet the needs of modern software development practices, with new trends emerging to address the challenges development teams face. Three key trends currently gaining traction in continuous testing are cloud-based testing, shift-left testing and security testing. These trends are driven by the need to increase efficiency and speed in software development while ensuring the highest quality and security levels. Let’s take a closer look at these trends. Cloud-Based Testing: Continuous testing is deployed through cloud-based computing, which provides multiple benefits like ease of deployment, mobile accessibility and quick setup time. Businesses are now adopting cloud-based services due to their availability, flexibility and cost-effectiveness. Cloud-based testing doesn’t require coding skills or setup time, which makes it a popular choice for businesses. ... Shift-Left Testing: Shift-left testing is software testing that involves testing earlier in the development cycle rather than waiting until later stages, such as system or acceptance testing.


IT is driving new enterprise sustainability efforts

There’s an additional sustainability benefit to modernizing applications, says Patel at Capgemini. “Certain applications are written in a way that consumes more energy.” Digital assessments can help measure the carbon footprint of internally developed apps, she says. Modern application design is key to using the cloud efficiently. At Choice Hotels, many components now run as services that can be configured to automatically shut down during off hours. “Some run as micro processes when called. We’re using serverless technologies and spot instances in the AWS world, which are more efficient, and we’re building systems that can handle it when those disappear,” Kirkland says. “Every digital interaction has a carbon price, so figure out how to streamline that,” advises Patel. This includes business process reengineering, as well as addressing data storage and retention policies. For example, Capgemini engages employees in sustainable IT by holding regular “digital cleaning days” that include deleting or archiving email messages and cleaning up collaborative workspaces.


SRE vs. DevOps? Successful Platform Engineering Needs Both

The complexity of managing today’s cloud native applications drains DevOps teams. Building and operating modern applications requires significant amounts of infrastructure and an entire portfolio of diverse tools. When individual developers or teams choose to use different tools and processes to work on an application, this tooling inconsistency and incompatibility causes delays and errors. To overcome this, platform engineering teams provide a standardized set of tools and infrastructure that all project developers can use to build and deploy the app more easily. Additionally, scaling applications is difficult and time-consuming, especially when traffic and usage patterns change over time. Platform engineering teams address this with their golden paths — or environments designed to scale quickly and easily — and logical application configuration. Platform engineering also helps with reliability. Development teams that use a set of shared tools and infrastructure tested for interoperability and designed for reliability and availability make more reliable software.


Zero Trust Model: The Best Way to Build a Robust Data Backup Strategy

A zero trust model changes your primary security principle from the age-old axiom “trust but verify” to “never trust; always verify.” Zero trust is a security concept that assumes any user, device, or application seeking access to a network is not to be automatically trusted, even if it is within the network perimeter. Instead, zero trust requires verification of every request for access, using a variety of security technologies and techniques such as multifactor authentication (MFA), least-privilege access, and continuous monitoring. A zero trust environment provides many benefits, though it is not without its flaws. Trust brokers are the central component of zero trust architecture. They authenticate users’ credentials and provide access to all other applications and services, which means they have the potential to become a single point of failure. Additionally, some multifactor authentication processes might cause users to wait a few minutes before allowing them to log in, which can hinder employee productivity. The location of trust brokers can also create latency issues for users.
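The “never trust; always verify” and least-privilege ideas combine naturally in code: every request is checked for both a valid credential and the specific permission it needs, regardless of where on the network it originates. A hypothetical sketch (token names and scope strings are invented):

```python
# Toy credential store mapping tokens to identities and their granted scopes.
VALID_TOKENS = {"tok-alice": {"user": "alice", "scopes": {"backup:read"}}}

def authorize(token: str, required_scope: str) -> bool:
    """Verify the credential AND the specific permission on every call;
    no request is trusted just because it came from 'inside' the perimeter."""
    identity = VALID_TOKENS.get(token)
    if identity is None:                 # unknown credential -> deny
        return False
    return required_scope in identity["scopes"]  # least privilege

# Reading backups is within alice's granted scopes; restoring is not.
can_read = authorize("tok-alice", "backup:read")
can_restore = authorize("tok-alice", "backup:restore")
```

A real trust broker would also check device posture, MFA state, and session freshness on each request, which is exactly why it can become the latency bottleneck and single point of failure the article describes.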


How to Manage Data as a Product

The way most organizations go about managing data is out of step with the way people want to use data, says Wim Stoop, senior director of product marketing at Cloudera. “If you want to get your teeth fixed or your appendix out you go to an expert rather than a generalist,” he says. “The same should apply to the data that people in organizations need.” However, most enterprises treat data as a centralized and protected asset. It’s locked up in production applications, data warehouses, and data lakes that are administered by a small cadre of technical specialists. Access is tightly controlled, and few people are aware of the data the organization possesses outside of their immediate purview. The drive towards organizational agility has helped fuel interest in the data mesh. “Individual teams that are responsible for data can iterate faster in a well-defined construct,” Stoop says. “The shift to treating data as a product breaks down silos and gives data longevity because it’s clearly defined, supported and maintained by the employees that know it intimately.”


Preparing for the Worst: Essential IT Crisis Preparation Steps

Crisis preparation begins with planning -- outlining the steps that must be taken in the event of a crisis, as well as procedures for data backup and recovery, network security, communication with stakeholders, and employee safety, says O’Brien, who founded the Yale Law School Privacy Lab. “Every organization should conduct regular drills and simulations to test the effectiveness of their plan,” he adds. Every enterprise should appoint an overall crisis management coordinator, an individual responsible for ensuring that there’s a coordinated, updated, and rehearsed crisis management plan, Glair advises. He also recommends creating a crisis management chain of authority that’s ready to jump into action as soon as a crisis event occurs. The crisis management coordinator may report directly to any of several enterprise departments, including risk management, legal, operations, or even the CIO or CFO. “The reporting location is not as important as the authority the coordinator is granted to prepare and manage the crisis management strategy,” he says.


How to make developers love security

Developers hate being slowed down or interrupted. Unfortunately, legacy security testing systems often have long feedback loops that negatively impact developer velocity. Whether it’s complex automated scans or asking the security team to complete manual reviews, these activities are a source of friction. They increase the delay between making a change and verifying its effect. Security suites with many different tools can result in context switching and multi-step mitigations. Additionally, tools aren’t always equipped to find problems in older code, either. Only scanning the new changes in your pipeline maximizes performance, but this can allow oversights to occur as more vulnerabilities become known. Similarly, developers have to refamiliarize themselves with old work whenever a vulnerability impacts it. This is a cognitive burden that further increases the fix’s overall time and effort. All too often, these problems add up to an inefficient security model that prevents timely patches and consumes developers’ productive hours. 
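The scanning tradeoff described above (fast incremental scans of new changes versus complete but slow full scans that also catch newly disclosed vulnerabilities in old code) can be sketched in a few lines. The file names and selection logic are purely illustrative:

```python
# Illustrative pipeline helper: decide which files a security scan should
# cover on this run. Scanning only changed files keeps feedback loops short,
# but old files matched by newly disclosed vulnerability advisories must be
# pulled back in, or incremental scans accumulate blind spots.
def scan_targets(all_files, changed_files, known_vulnerable, incremental=True):
    """Return the set of files to scan in this pipeline run."""
    if incremental:
        targets = set(changed_files)                       # fast path
        targets |= {f for f in all_files if f in known_vulnerable}
        return targets
    return set(all_files)                                  # slow but complete

repo = ["auth.py", "legacy/crypto.py", "api.py"]
targets = scan_targets(repo, changed_files=["api.py"],
                       known_vulnerable={"legacy/crypto.py"})
```

In a real pipeline, `changed_files` would typically come from the version-control diff and `known_vulnerable` from an advisory feed; the point is that incremental speed and coverage of older code are not mutually exclusive if the scanner re-includes advisory-matched files.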



Quote for the day:

"Incompetence annoys me. Overconfidence terrifies me." -- Malcolm Gladwell

Daily Tech Digest - June 06, 2023

CISOs, IT lack confidence in executives’ cyber-defense knowledge

CISOs need to understand precisely how and where the two risk environments — corporate and personal — intersect to get ahead of this problem. Here are four things to work on to ensure key executives are protected outside the office environment. Be vigilant for changes in leadership and executive team risk profiles: these blind spots can be a CEO who makes frequent media appearances, has stock market dealings that are open to public scrutiny, or is simply well enough known to be included in social media conversations. Identify the company’s “crown jewels” that need to be protected; this needs to include an evaluation of potential risks, including through personal attack, and developing mitigation strategies. Ensure high-level executives get cybersecurity training: all staff, C-level and board executives included, should attend tailored awareness training that includes phishing simulations and tabletop exercises. Share responsibilities: CISOs should work with other high-level executives to ensure shared responsibility is carried across the organization, which means understanding shared risk.


Cyber spotlight falls on boardroom ‘privilege’ as incidents soar

“With the growth and increasing sophistication of social engineering, organisations must enhance the protection of their senior leadership now to avoid expensive system intrusions,” added Novak. “When you look at the grand scheme of social engineering, the reason we see this increasing is because it’s a relatively easy thing for a threat actor to throw out there and try to hit a lot of organisations with,” Novak told reporters during a pre-briefing session attended by Computer Weekly. “This ties back to being financially motivated – most of these events are about fraudulent movement of money and, typically, that results in them getting paid very quickly.” ... “Globally, cyber threat actors continue their relentless efforts to acquire sensitive consumer and business data. The revenue generated from that information is staggering, and it’s not lost on business leaders, as it is front and centre at the board level,” said IDC research vice-president Craig Robinson. The research team added that the fact many organisations continue to rely on distributed workforces added to the challenges faced by defenders in creating and, crucially, enforcing human-centric security best practice.


Will companies use low code to run their businesses?

Today's low code platforms typically provide a visual, drag-and-drop interface for building form-based applications, or tools to build a visual workflow. The resulting apps can be used to automate business processes, create mobile apps, and integrate with other systems. The aim of low code technology is to make application development much more accessible and efficient, so that organizations can better respond to changing business needs and stay competitive. I've seen a lot of other benefits in my discussions with CIOs, for whom low code was certainly not a topic that rose to their pay grade until the last couple of years. Now it's clear that low code can reduce dependencies on hard-to-find development talent, lower the cost of development while speeding it up, and reduce backlogs. ... Low code is becoming a central part of the future of IT, and there are now increasing proof points to show that low code adoption can successfully happen in a substantial, even comprehensive way in both IT and the business.


5 Must-Know Facts about 5G Network Security and Its Cloud Benefits

With its low latency, higher bandwidth, and extensive security measures, 5G strengthens the security of cloud connectivity. This upgrade enables secure and reliable transmission of sensitive information as well as real-time data processing. 5G allows organizations to confidently use cloud services to store and manage their data, reducing the risk of data breaches. 5G offers superior fault tolerance when compared to cable connections, primarily due to the inherent resilience of wireless channels in mitigating communication failures. With a cable connecting an office or factory to a provider, it might be necessary to build a backup connection through an optical fiber or radio. But 5G has a reserved channel from the outset. If one base station fails, others will take over automatically, making downtime unlikely. In addition, 5G network slicing capabilities provide companies with dedicated virtual networks within their IT system. This enables better isolation and segregation of data, applications, and services, improving overall security.


Private 5G might just make you rethink your wireless options

“Cal Poly is a data-laden environment where, to unlock the true value of that data, the data must constantly move to where it is needed,” said Bill Britton, Cal Poly’s vice president for IT services and CIO. Unfortunately, the university’s legacy Wi-Fi networks were straining under the weight of that data. Before investigating 5G options, Cal Poly’s IT team audited their networks to see how, where, and why data overloaded existing networks. They tracked usage down to the component level and found things like a single Xbox downloading close to 2 terabytes of data, as a single student’s console served as a gaming hub for more than 1,500 other people worldwide, all gobbling up Cal Poly bandwidth. “What happens if an Xbox is consuming that much bandwidth during registration or final exams?” Britton asked. “There’s a myth that you can just add more bandwidth, but with Wi-Fi, the infrastructure itself will always be the major limiting factor,” he said. Without costly traffic management add-ons, legacy Wi-Fi has severe limitations, including issues with hand-offs, interference, and insufficient roaming capabilities.


How to Boost Cybersecurity Through Better Communication

Cybersecurity feels like war. And that naturally leads to cybersecurity staff forming a combative mindset. Tasked with securing a massive and growing cybersecurity attack surface, constantly evolving threat landscape, vulnerability-prone software, insider threats, new and unprecedented challenges (like the recent shift to remote work), limited budgets, a persistent skills shortage and general understaffing and other constraints — users just seem like another set of problems coming at you. ... The larger conversation between cybersecurity staff and employees feels like the security pros have one set of objectives (preventing and dealing with cyberattacks) that feel at odds with the objectives of everyone else in the organization (winning customers, earning profits, achieving growth goals, minimizing customer loss and many others). The big picture is that the larger goals of the organization are shared goals. All those business objectives depend on cybersecurity — security is part of what makes them possible. By focusing on shared objectives, users will partner more readily.


4 Big Regulatory Issues To Ponder in 2023

Ensuring regulatory compliance can feel like a delicate juggling act. Large enterprises with operations in multiple states and countries are faced with a patchwork of laws that are evolving in an attempt to keep up with today’s proliferation of data and technology. “It’s challenging to stay on top of what seems to be a never-ending list of new requirements, some of which overlap but do not align,” Hodge says. Enterprises may not even have the necessary knowledge to understand where they stand with regulatory compliance. “Many companies don’t even know everywhere sensitive data resides in their technical stack. Companies that had to comply with GDPR or CCPA may have done proper data mapping, but most haven’t. This generally tends to be the most time- and resource-intensive,” according to Robin Andruss, chief privacy officer at data privacy company Skyflow. Budgetary and staffing constraints complicate that juggling act. Enterprises need technology, people, and training to keep up with compliance. Getting an adequate share of the budget for those resources can be particularly challenging for smaller companies.


Generative AI and the future of HR

Generative technology can actually pull on the skills that are required to be successful in the job. That’s not to say managers don’t need to check the end product. They’ll need to be that human in the loop to make sure the job requirement is a good one. But gen AI can dramatically improve speed and quality. The other application in recruiting is candidate personalization. Right now, if you’re an organization with tens of thousands of applicants, you may or may not have super customized ways of reaching out to the people who have applied. With generative AI, you can include much more personalization about the candidate, the job, and what other jobs may be available if there’s a reason the applicant isn’t a fit. All those things are made immensely easier and faster through generative AI. ... The best application of gen AI is in large skill pools where you’re trying to fill a reasonably well-known job. We need a more productive and efficient way to navigate all the profiles coming through. Where it makes me a little anxious is anytime it’s a novel job—a new role—or even, in US law, a job that’s changed more than 25 percent or 33 percent. 


How to move the needle on innovation

“You can’t talk about innovation without considering culture, but I view that in a very practical fashion: it’s got to be more than philosophy and ideology,” says Marchand. “Creating the right culture has to start at the top with an appreciation for and a dedication to innovation.” In considering the innovation-savvy leaders with whom she has worked, Marchand finds that they all have a passion for problem-solving, an insatiable sense of curiosity, and a willingness to embrace change. “They like to be involved in transformations and don’t mind a little bit of ambiguity,” she says. “They also have an appreciation for the fact that even though they’re there to support the shareholders, they’re going to enable innovation—new products, services, and ideas—to flourish.” Weaving innovation into the business. Enabling innovation includes devoting resources to innovation in an integrated manner. “One major pharma company created a little startup unit staffed by its ten best project managers and gave them [US]$20 million and 18 months to see what they could come up with,” recalls Marchand. 


If You Want to Deliver Fast, Your Tests Have the Last Word

We need to have something that doesn’t change, that feels safe and that frees our mind from the burden of thinking whether or not it actually fits. We enter autopilot mode. The problem with that is that we want software development to behave like an assembly line: once the assembly line is built, we never touch it. We operate in the same way all the time. That may work with our CI/CD lanes for a while, but sadly it doesn’t always work well with our code. It even gets worse because sometimes the message is transmitted so many times that it loses its essence and at some point, we take that practice as part of our identity, we defend it, and we don’t let different points of view in. ... We try to achieve this responsiveness with practices of different natures: technical, such as CI/CD (Continuous Integration/Continuous Deployment), and strategic, such as developing in iterations. However, we often forget about agility when we deal with the core of Software Development: coding. Imagine preparing your favorite meal or dessert without the main ingredient of the recipe.



Quote for the day:

"Rank does not confer privilege or give power. It imposes responsibility." -- Peter F. Drucker

Daily Tech Digest - June 05, 2023

How to create generative AI confidence for enterprise success

The key to enterprise-ready generative AI is in rigorously structuring data so that it provides proper context, which can then be leveraged to train highly refined large language models (LLMs). A well-choreographed balance between polished LLMs, actionable automation and select human checkpoints forms strong anti-hallucination frameworks that allow generative AI to deliver correct results that create real B2B enterprise value. ... The initial phase of any company’s system is the blank slate that ingests information tailored to a company and its specific goals. The middle phase is the heart of a well-engineered system, which includes rigorous LLM fine-tuning. OpenAI describes fine-tuning models as “a powerful technique to create a new model that’s specific to your use case.” This occurs by taking generative AI’s normal approach and training models on many more case-specific examples, thus achieving better results. In this phase, companies have a choice between using a mix of hard-coded automation and fine-tuned LLMs. 
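Fine-tuning, as described above, comes down to training on many more case-specific examples from the company's own domain. A minimal sketch of preparing such examples as JSONL, the one-record-per-line format most fine-tuning pipelines ingest (the ticket data is invented, and exact field names vary by provider, so treat both as illustrative):

```python
import json

# Case-specific training examples drawn from a company's own domain;
# the content here is made up for illustration.
examples = [
    {"prompt": "Classify ticket: 'VPN drops every hour'",
     "completion": "network"},
    {"prompt": "Classify ticket: 'Invoice total is wrong'",
     "completion": "billing"},
]

def to_jsonl(records) -> str:
    """Emit one JSON object per line -- the common shape for
    fine-tuning datasets (field names vary by provider)."""
    return "\n".join(json.dumps(r) for r in records)

dataset = to_jsonl(examples)
```

The anti-hallucination framing in the article falls out of this structure: the narrower and more rigorously labeled the examples, the less room the tuned model has to improvise outside the company's use case.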


Governments worldwide grapple with regulation to rein in AI dangers

Although a number of countries have begun to draft AI regulations, such efforts are hampered by the reality that lawmakers constantly have to play catchup to new technologies, trying to understand their risks and rewards. “If we refer back to most technological advancements, such as the internet or artificial intelligence, it’s like a double-edged sword, as you can use it for both lawful and unlawful purposes,” said Felipe Romero Moreno, a principal lecturer at the University of Hertfordshire’s Law School whose work focuses on legal issues and regulation of emerging technologies, including AI. AI systems may also do harm inadvertently, since humans who program them can be biased, and the data the programs are trained with may contain bias or inaccurate information. “We need artificial intelligence that has been trained with unbiased data,” Romero Moreno said. “Otherwise, decisions made by AI will be inaccurate as well as discriminatory.”


10 notable critical infrastructure cybersecurity initiatives in 2023

In April, a group of OT security companies that usually compete with one another announced they were setting aside their rivalries to collaborate on a new vendor-neutral, open-source, and anonymous OT threat warning system called ETHOS (Emerging Threat Open Sharing). Formed as a nonprofit, ETHOS aims to share data on early threat indicators and discover new and novel attacks threatening industrial organizations that run essential services, including electricity, water, oil and gas production, and manufacturing systems. It has already gained US CISA endorsement, a boost that could give the initiative greater traction. All organizations, including public and private asset owners, can contribute to ETHOS at no cost, and founders envisage it evolving along the lines of open-source software Linux. ETHOS community and board members include some of the top OT security companies 1898 & Co., ABS Group, Claroty, Dragos, Forescout, NetRise, Network Perception, Nozomi Networks, Schneider Electric, Tenable, and Waterfall Security.


UK has time limit on ensuring cryptocurrency regulatory leadership

The report also said that interest in digital assets among investors and the general public led to the conclusion that cryptocurrency is more than a fad and is here to stay, and that cross-government planning is required if the UK wants to take the opportunities it offers. These recommendations followed contributions from the crypto sector, regulators, industry experts and the general public. The report said: “Other countries around the world are moving quickly to develop clear regulatory frameworks for cryptocurrency and digital assets. The UK must move within a finite window of opportunity within the next 12-18 months to ensure early leadership within this sector.” Scottish National Party MP and chair of the APPG, Lisa Cameron MP, said: “This is the first report of its kind compiled jointly involving Members of Parliament and the House of Lords and we are keen that it contributes to evidence-based policy development across the sector.


3 things CIOs must do now to accurately hit net-zero targets

One of the immediate efforts CIOs can take to accelerate sustainability goals includes selecting energy-efficient software, which can have a major impact on energy consumption. Uniting Technology and Sustainability surveyed companies that said they were taking various approaches to incorporate sustainability throughout the software development lifecycle. ... This opportunity to collaborate with sustainability in mind extends to the influence CIOs hold over where and how employees work. By integrating remote working capabilities, the CIO plays a hand in an organization’s shift to an increasingly remote or hybrid workforce model—a move that can significantly reduce a company’s carbon footprint. This effort has the potential to not only create sustainability at scale, but increase employee satisfaction, which will power a more sustainable organization. ... CEOs believe new technology will allow them to reach sustainability goals and build resilience, with 55% of CEOs enhancing sustainability data collection capabilities, and 48% transitioning to a cloud infrastructure.


Serverless is the future of PostgreSQL

Shamgunov sees two primary benefits to running PostgreSQL serverless. The first is that developers no longer need to worry about sizing. All the developer needs is a connection string to the database without worrying about size/scale. Neon takes care of that completely. The second benefit is consumption-based pricing, with the ability to scale down to zero (and pay zero). This ability to scale to zero is something that AWS doesn’t offer, according to Ampt CEO Jeremy Daly. Even when your app is sitting idle, you’re going to pay. But not with Neon. As Shamgunov stresses in our interview, “In the SQL world, making it truly serverless is very, very hard. There are shades of gray” in terms of how companies try to deliver that serverless promise of scaling to zero, but only Neon currently can do so, he says. Do people care? The answer is yes, he insists. “What we’ve learned so far is that people really care about manageability, and that’s where serverless is the obvious winner. [It makes] consumption so easy. All you need to manage is a connection string.”
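The “all you need is a connection string” point is that a single DSN carries everything a client needs, so there is nothing to size or provision on the application side. A small sketch using only the standard library (the DSN below is made up):

```python
from urllib.parse import urlparse

def parse_dsn(dsn: str) -> dict:
    """Split a PostgreSQL-style connection string into driver parameters.
    Real drivers (psycopg, JDBC, etc.) accept the string directly; this
    just makes its contents visible."""
    parts = urlparse(dsn)
    return {
        "user": parts.username,
        "host": parts.hostname,
        "port": parts.port or 5432,   # PostgreSQL's default port
        "dbname": parts.path.lstrip("/"),
    }

params = parse_dsn("postgresql://app_user:secret@db.example.com:5432/orders")
```

Everything else (instance size, storage, scaling to zero) lives behind that endpoint, which is precisely the manageability argument being made.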


Cloud conundrum: The changing balance of microservices and monolithic applications

Containers and microservices are great for applications that can put everything together in a single place, and they make it easier for developers to run across many different platforms and computing equipment. Containers are also better at scaling an application up and down than starting and stopping a whole bunch of VMs, since they take fractions of a second to bring up, versus minutes for a VM. But there are still tradeoffs. Here is one way to describe the situation: “The microservices architecture is more beneficial for complex and evolving applications. But if you have a small engineering team aiming to develop a simple and lightweight application, there is no need to implement them.” But it would be wise not to discount VMs entirely. They can be an important stepping stone from the on-premises world, as Southwire Co. LLC’s Chief Information Officer Dan Stuart told SiliconANGLE in a recent interview. “We had a lot of old technology in our data center and were already familiar with VMware, so that made the move to Google’s Cloud easier,” he said.
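Why startup speed matters for scaling can be sketched with a toy autoscaling policy. The numbers and the `desired_replicas` function are hypothetical, not any real orchestrator’s logic: the idea is that because containers start in fractions of a second, a controller can afford to resize frequently and track demand closely, even down to zero replicas, whereas VM-based scaling must provision further ahead.

```python
import math

def desired_replicas(requests_per_sec, capacity_per_replica=50, minimum=0):
    """Track demand closely; fast container startup makes frequent
    resizing (including scale-to-zero) practical."""
    if requests_per_sec <= 0:
        return minimum
    return max(minimum, math.ceil(requests_per_sec / capacity_per_replica))

print(desired_replicas(0))            # 0
print(desired_replicas(120))          # 3
print(desired_replicas(10, minimum=2))  # 2: a floor for latency-sensitive apps
```

With minute-long VM boot times, a policy this reactive would thrash; the usual compromise is coarser scaling steps and pre-warmed capacity.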


A Case for Event-Driven Architecture With Mediator Topology

The most straightforward cases for reliability involve the converter services. The service locks a message in the queue when it starts processing and deletes it when it has finished its work and sent the result. If the service crashes, the message will become available again in the queue after a short timeout and can be processed by another instance of the converter. If the load grows faster than new instances are added or there are problems with the infrastructure, messages accumulate in the queue. They will be processed right after the system stabilizes. In the case of the Mediator, all the heavy lifting is again done by the Workflow Core library. Because all running workflows and their state are stored in the database, if an abnormal termination of the service occurs, the workflows will continue execution from the last recorded state. Also, we have configurations to retry failed steps, timeouts, alternative scenarios, and limits on the maximum number of parallel workflows. What’s more, the entire system is idempotent, allowing every operation to be retried safely without side effects and mitigating the concern of duplicate messages being received.
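The lock, process, delete cycle and the idempotency guarantee described above can be sketched in a few lines. This is an in-memory stand-in with hypothetical names, not the article’s actual queue or the Workflow Core library: a message becomes invisible (“locked”) while a worker holds it, reappears after a timeout if the worker crashes before deleting it, and a deduplication set makes redelivery a safe no-op.

```python
import time

class VisibilityQueue:
    """Toy queue: a received message is hidden until `timeout` elapses,
    then becomes visible again unless it was deleted (acknowledged)."""
    def __init__(self, timeout=5.0):
        self.timeout = timeout
        self.messages = {}  # msg_id -> (payload, invisible_until)

    def put(self, msg_id, payload):
        self.messages[msg_id] = (payload, 0.0)

    def receive(self, now=None):
        now = time.monotonic() if now is None else now
        for msg_id, (payload, until) in self.messages.items():
            if until <= now:
                # lock the message for `timeout` seconds
                self.messages[msg_id] = (payload, now + self.timeout)
                return msg_id, payload
        return None  # nothing visible right now

    def delete(self, msg_id):
        self.messages.pop(msg_id, None)

processed = set()  # remembering handled ids makes redelivery side-effect free

def handle(queue, now=None):
    item = queue.receive(now)
    if item is None:
        return None
    msg_id, payload = item
    if msg_id not in processed:   # idempotent: a duplicate is a no-op
        processed.add(msg_id)     # ... do the real conversion work here ...
    queue.delete(msg_id)          # acknowledge only after the work succeeded
    return msg_id
```

A crashed worker simply never calls `delete`, so after the timeout another instance picks the message up, exactly the recovery behavior the article relies on.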


GDPR — How does it impact AI?

It is no surprise that legislation has lagged behind the unprecedented rise of AI, but this is where leaning more on data protection regulation may help to fill an important gap in the meantime. Another factor that has completely altered the landscape in the past five years is the UK’s exit from the EU, which brought additional complexities for the effective monitoring of personal data. While ‘UK GDPR’ is largely the same as the EU version, it does carry some slight differences, making it imperative for companies to increase education around data usage so they understand the new policy landscape and avoid running afoul of those differences. ... Looking ahead, although the landscape has undoubtedly become far more complex, I remain a firm believer that the GDPR and AI can still work successfully in tandem, as long as rigorous measures, checks and best practices are embedded firmly into business strategies, and on the proviso that AI-related policy also evolves as a way to supplement existing data regulations.


The metaverse: Not dead yet

“We are in a winter for the metaverse, and how long that chill lasts remains to be seen,” said J.P. Gownder, vice president and principal analyst on Forrester's Future of Work team. Late last year, the analyst firm predicted a drop-off in interest during 2023 as a more realistic picture of the technology’s current possibilities emerged. “The hype was way exceeding the reality of the capabilities of the technology, the interest from customers — both business and consumer — and just the overall maturity of the market.” Yet the metaverse concept isn’t going away. “We think that, in the future, something like the metaverse will exist, whereby we have a 3D experience layer over the internet,” said Gownder. Don’t expect this to happen any time soon, though: the development of the metaverse could take a decade, according to Forrester. ... As metaverse hype subsides, the underlying technologies continue to develop and evolve, on both the hardware and software fronts. ... “There continues to be steady development of metaverse-type concepts. But just like we saw with the march to autonomous vehicles, this takes a long time to mature and put into place,” Lightman said.



Quote for the day:

“Being a leader, at its core, is about how we show up each day to work with the people in our charge.” -- Claudio Toyama