Daily Tech Digest - June 24, 2022

Toward data dignity: Let’s find the right rules and tools for curbing the power of Big Tech

Enlightened new policies and legislation, building on blueprints like the European Union’s GDPR and California’s CCPA, are a critical start to creating a more expansive and thoughtful formulation for privacy. Lawmakers and regulators need to consult systematically with technologists and policymakers who deeply understand the issues at stake and the contours of a sustainable working system. That was one of the motivations behind the creation of the Ethical Tech Project—to gather like-minded ethical technologists, academics, and business leaders to engage in that intentional dialogue with policymakers. We are starting to see elected officials propose regulatory bodies akin to what the Ethical Tech Project was designed to do—convene tech leaders to build standards protecting users against abuse. A recently proposed federal watchdog would be a step in the right direction to usher in proactive tech regulation and start a conversation between the government and the individuals who have the know-how to find and define the common-sense privacy solutions consumers need.


For HPC Cloud The Underlying Hardware Will Always Matter

For a large contingent of those ordinary enterprise cloud users, the belief is that a major benefit of the cloud is not thinking about the underlying infrastructure. But, in fact, understanding the underlying infrastructure is critical to unleashing the value and optimal performance of a cloud deployment. Even more so, HPC application owners need in-depth insight and, therefore, a trusted hardware platform with co-design and portability built in from the ground up and solidified through long-running cloud provider partnerships. ... In other words, the standard lift-and-shift approach to cloud migration is not an option. The need for blazing-fast performance with complex parallel codes means fine-tuning hardware and software. That’s critical for performance and for cost optimization, says Amy Leeland, director of hyperscale cloud software and solutions at Intel. “Software in the cloud isn’t always set by default to use Intel CPU extensions or embedded accelerators for optimal performance, even though it is so important to have the right software stack and optimizations to unlock the potential of a platform, even on a public cloud,” she explains.


NSA, CISA say: Don't block PowerShell, here's what to do instead

Defenders shouldn't disable PowerShell, a scripting language, because it is a useful command-line interface for Windows that can help with forensics, incident response and automating desktop tasks, according to joint advice from the US spy service the National Security Agency (NSA), the US Cybersecurity and Infrastructure Security Agency (CISA), and the New Zealand and UK national cybersecurity centres. ... So, what should defenders do? Remove PowerShell? Block it? Or just configure it? "Cybersecurity authorities from the United States, New Zealand, and the United Kingdom recommend proper configuration and monitoring of PowerShell, as opposed to removing or disabling PowerShell entirely," the agencies say. "This will provide benefits from the security capabilities PowerShell can enable while reducing the likelihood of malicious actors using it undetected after gaining access into victim networks." PowerShell's extensibility, and the fact that it ships with Windows 10 and 11, gives attackers a means to abuse the tool. 


How companies are prioritizing infosec and compliance

“This study confirmed our long-standing theory that when security and compliance have a unified strategy and vision, every department and employee within the organization benefits, as does the business customer,” said Christopher M. Steffen, managing research director of EMA. “Most organizations view compliance and compliance-related activities as ‘the cost of business,’ something they have to do to conduct operations in certain markets. Increasingly, forward-thinking organizations are looking for ways to maximize their competitive advantage in their markets and having a best-in-class data privacy program or compliance program is something that more savvy customers are interested in, especially in organizations with a global reach. Compliance is no longer a ‘table stakes’ proposition: comprehensive compliance programs focused on data security and privacy can be the difference in very tight markets and are often a deciding factor for organizations choosing one vendor over another.”


IDC Perspective on Integration of Quantum Computing and HPC

Quantum and classical hardware vendors are working to develop quantum and quantum-inspired computing systems dedicated to solving HPC problems. For example, using a co-design approach, quantum start-up IQM is mapping quantum applications and algorithms directly to the quantum processor to develop an application-specific superconducting computer. The result is a quantum system optimized to run particular applications such as HPC workloads. In collaboration with Atos, quantum hardware start-up Pasqal is working to incorporate its neutral-atom quantum processors into HPC environments. NVIDIA’s cuQuantum Appliance and cuQuantum software development kit provide enterprises the quantum simulation hardware and developer tools needed to integrate and run quantum simulations in HPC environments. At a more global level, the European High Performance Computing Joint Undertaking (EuroHPC JU) announced its funding for the High-Performance Computer and Quantum Simulator (HPCQS) hybrid project.
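To ground what “quantum simulation” on classical or HPC hardware actually computes, here is a toy statevector example in Python with NumPy. It is a generic illustration only, not the cuQuantum API or any vendor's toolchain: it applies a Hadamard and a CNOT gate to two qubits and prints the resulting measurement probabilities.

```python
# Toy two-qubit statevector simulation (generic illustration, not cuQuantum).
import numpy as np

H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)   # Hadamard gate
CNOT = np.array([[1, 0, 0, 0],
                 [0, 1, 0, 0],
                 [0, 0, 0, 1],
                 [0, 0, 1, 0]])                 # controlled-NOT, control = qubit 0

state = np.zeros(4)
state[0] = 1.0                                  # start in |00>
state = np.kron(H, np.eye(2)) @ state           # Hadamard on qubit 0
state = CNOT @ state                            # entangle into a Bell state

print(np.abs(state) ** 2)                       # [0.5, 0, 0, 0.5]
```

Production simulators perform the same linear algebra at vastly larger scale, which is why GPU- and HPC-class hardware matters for this workload.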


Australian researchers develop a coherent quantum simulator

“What we’re doing is making the actual processor itself mimic the single carbon-carbon bonds and the double carbon-carbon bonds,” Simmons explains. “We literally engineered, with sub-nanometre precision, to try and mimic those bonds inside the silicon system. So that’s why it’s called a quantum analog simulator.” Using the atomic transistors in their machine, the researchers simulated the covalent bonds in polyacetylene. According to the SSH theory, there are two different scenarios in polyacetylene, called “topological states” – “topological” because of their different geometries. In one state, you can cut the chain at the single carbon-carbon bonds, so you have double bonds at the ends of the chain. In the other, you cut the double bonds, leaving single carbon-carbon bonds at the ends of the chain and isolating the two atoms on either end due to the longer distance in the single bonds. The two topological states show completely different behaviour when an electrical current is passed through the molecular chain. That’s the theory. “When we make the device,” Simmons says, “we see exactly that behaviour. So that’s super exciting.”


Is Kubernetes key to enabling edge workloads?

Lightweight and deployed in milliseconds, containers enable compatibility between different infrastructure environments and apps running across disparate platforms. Isolating edge workloads in containers protects them from cyber threats, while microservices let developers update apps without worrying about platform-level dependencies. Benefits of orchestrating edge containers with Kubernetes include:

Centralized Management — Users control the entire app deployment across on-prem, cloud, and edge environments through a single pane of glass.
Accelerated Scalability — Automatic network rerouting and the capability to self-heal or replace existing nodes in case of failure remove the need for manual scaling.
Simplified Deployment — Cloud-agnostic, DevOps-friendly, and deployable anywhere from VMs to bare metal environments, Kubernetes grants quick and reliable access to hybrid cloud computing.
Resource Optimization — Kubernetes maximizes the use of available resources on bare metal and provides an abstraction layer on top of VMs, optimizing their deployment and use.
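As a small, hedged illustration of the “single pane of glass” point, the sketch below uses the official Kubernetes Python client to list nodes carrying a hypothetical edge label and report their readiness from one control plane. The label name and kubeconfig setup are assumptions made for the example, not Kubernetes defaults.

```python
# Sketch: inspecting edge nodes through one Kubernetes API (hypothetical edge label).
from kubernetes import client, config

config.load_kube_config()   # local kubeconfig; use config.load_incluster_config() inside a pod
v1 = client.CoreV1Api()

# One API and one credential set, whether nodes run on-prem, in the cloud, or at the edge.
edge_nodes = v1.list_node(label_selector="node-role.example.com/edge=true")
for node in edge_nodes.items:
    ready = next((c.status for c in node.status.conditions if c.type == "Ready"), "Unknown")
    print(f"{node.metadata.name}: Ready={ready}")
```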


Canada Introduces Infrastructure and Data Privacy Bills

The bill sets up a clear legal framework and details expectations for critical infrastructure operators, says Sam Andrey, a director at think tank Cybersecure Policy Exchange at Toronto Metropolitan University. The act also creates a framework for businesses and government to exchange information on vulnerabilities, risks, and incidents, Andrey says, but it does not address some other key aspects of cybersecurity. The bill should offer "greater clarity" on transparency and oversight of what he says are "fairly sweeping powers." These powers, he says, could perhaps be monitored by the National Security and Intelligence Review Agency, an independent government watchdog. The bill also lacks provisions to protect "good faith" researchers. "We would urge the government to consider using this law to require government agencies and critical infrastructure operators to put in place coordinated vulnerability disclosure programs, through which security researchers can disclose vulnerabilities in good faith," Andrey says.


Prioritize people during cultural transformation in 3 steps

Addressing your employees’ overall well-being is also critical. Many workers who are actively looking for a new job say they’re doing so because their mental health and well-being have been negatively impacted in their current role. Increasingly, employees are placing greater value on their well-being than on their salary and job title. This isn’t a new issue, but it’s taken on a new urgency since COVID pushed millions of workers into the remote workplace. For example, a 2019 Buffer study found that 19 percent of remote workers reported feeling lonely working from home – not surprising, since most of us were forced to severely limit our social interactions outside of work as well. Leaders can help address this by taking actions as simple as introducing more one-on-one meetings, which can boost morale. One-on-one meetings are essential to promoting ongoing feedback. When teams worked together in an office, communication was more efficient mainly because employees and managers could meet and catch up organically throughout the day.


Pathways to a Strategic Intelligence Program

Strong data visualization capabilities can also be a huge boost to the effectiveness of a strategic intelligence program because they help executive leadership, including the board, quickly understand and evaluate risk information. “There’s an overwhelming amount of data out there and so it’s crucial to be able to separate the signal from the noise,” he says. “Good data visualization tools allow you to do that in a very efficient, impactful and cost-effective manner, and to communicate information to busy senior leaders in a way that is most useful for them.” Calagna agrees that data visualization tools play an important role in bringing a strategic intelligence program to life for leaders across functions within any organization, helping them to understand complex scenarios and insights more easily than narrative and other report forms may permit. “By quickly turning high data volumes into complex analyses, data visualization tools can enable organizations to relay near real-time insights and intelligence that support better informed decision-making,” she says. Data visualization tools can also help monitor trends and assumptions that impact strategic plans, as well as market forces and shifts that will inform strategic choices.



Quote for the day:

"Patience puts a crown on the head." -- Ugandan Proverb

Daily Tech Digest - June 23, 2022

Microsoft’s framework for building AI systems responsibly

AI systems are the product of many different decisions made by those who develop and deploy them. From system purpose to how people interact with AI systems, we need to proactively guide these decisions toward more beneficial and equitable outcomes. That means keeping people and their goals at the center of system design decisions and respecting enduring values like fairness, reliability and safety, privacy and security, inclusiveness, transparency, and accountability. The Responsible AI Standard sets out our best thinking on how we will build AI systems to uphold these values and earn society’s trust. It provides specific, actionable guidance for our teams that goes beyond the high-level principles that have dominated the AI landscape to date. The Standard details concrete goals or outcomes that teams developing AI systems must strive to secure. These goals help break down a broad principle like ‘accountability’ into its key enablers, such as impact assessments, data governance, and human oversight. Each goal is then composed of a set of requirements, which are steps that teams must take to ensure that AI systems meet the goals throughout the system lifecycle.


Success Demands Sacrifice. What Are You Willing to Give Up?

The key is to preplan your sacrifices rather than sacrifice parts of your life by default. Look at your normal schedule and think about where you could find the extra time and energy for your business, without sacrificing the things you value most in life. Maybe you decide to stay up later after the kids are in bed to get work done. Maybe you stop binge-watching on Hulu so you can get to the gym. Maybe you give up that second round of golf each week to spend more time with your spouse. Maybe you leave the office for a couple of hours to catch your kid's soccer game and come back later. Maybe you sacrifice some money to get extra help in for the business. Maybe you stop micro-managing everything in your business and actually delegate more responsibility to others. We all have areas where we spend our time that we can tweak. You just have to decide what's right for you. You'll always have to sacrifice something to build a business or accomplish anything extraordinary in life. But giving up what you value most is not a good trade-off. Make sure you're making smart sacrifices by giving up what doesn't matter for things that do.


Microsoft to retire controversial facial recognition tool that claims to identify emotion

The decision is part of a larger overhaul of Microsoft’s AI ethics policies. The company’s updated Responsible AI Standards (first outlined in 2019) emphasize accountability to find out who uses its services and greater human oversight into where these tools are applied. In practical terms, this means Microsoft will limit access to some features of its facial recognition services (known as Azure Face) and remove others entirely. Users will have to apply to use Azure Face for facial identification, for example, telling Microsoft exactly how and where they’ll be deploying its systems. Some use cases with less harmful potential (like automatically blurring faces in images and videos) will remain open-access. ... “Experts inside and outside the company have highlighted the lack of scientific consensus on the definition of ‘emotions,’ the challenges in how inferences generalize across use cases, regions, and demographics, and the heightened privacy concerns around this type of capability,” wrote Microsoft’s chief responsible AI officer.


The Unreasonable Effectiveness of Zero Shot Learning

OpenAI also has something for that. They have OpenAI CLIP, which stands for Contrastive Language-Image Pre-training. What this model does is bring together text and image embeddings. It generates an embedding for each text and an embedding for each image, and these are aligned with each other. The way this model was trained is that, for example, you have a set of images, like an image of a cute puppy. Then you have a set of text like, Pepper the Aussie Pup. It's trained so that the embedding of this picture of the puppy and the embedding of the text, Pepper the Aussie Pup, end up really close to each other. It's trained on 400 million image text pairs, which were scraped from the internet. You can imagine that someone did indeed put an image of a puppy on the internet, and didn't write under it, "This is Pepper the Aussie Pup."
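To make the alignment concrete, here is a minimal zero-shot classification sketch in Python using the open-source CLIP package OpenAI released; the model name, image path, and candidate captions are placeholders for illustration. The image and each caption are embedded into the same space, and the caption whose embedding lands closest to the image embedding wins, with no task-specific training.

```python
# Zero-shot classification with CLIP: pick the caption closest to the image embedding.
# Assumes PyTorch and the open-source CLIP package from OpenAI's GitHub repo are installed.
import torch
import clip
from PIL import Image

device = "cuda" if torch.cuda.is_available() else "cpu"
model, preprocess = clip.load("ViT-B/32", device=device)

image = preprocess(Image.open("puppy.jpg")).unsqueeze(0).to(device)   # placeholder image
captions = ["a photo of a puppy", "a photo of a cat", "a diagram of a circuit board"]
text = clip.tokenize(captions).to(device)

with torch.no_grad():
    image_emb = model.encode_image(image)
    text_emb = model.encode_text(text)

# Cosine similarity between the image embedding and each caption embedding.
image_emb = image_emb / image_emb.norm(dim=-1, keepdim=True)
text_emb = text_emb / text_emb.norm(dim=-1, keepdim=True)
scores = (image_emb @ text_emb.T).softmax(dim=-1)

print(captions[scores.argmax().item()])   # expected: "a photo of a puppy"
```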


Quantum Advantage in Learning from Experiments

Quantum computers will likely offer exponential improvements over classical systems for certain problems, but to realize their potential, researchers first need to scale up the number of qubits and to improve quantum error correction. What’s more, the exponential speed-up over classical algorithms promised by quantum computers relies on a big, unproven assumption about so-called “complexity classes” of problems — namely, that the class of problems that can be solved on a quantum computer is larger than those that can be solved on a classical computer. It seems like a reasonable assumption, and yet, no one has proven it. Until it's proven, every claim of quantum advantage will come with an asterisk: that it can do better than any known classical algorithm. Quantum sensors, on the other hand, are already being used for some high-precision measurements and offer modest (and proven) advantages over classical sensors. Some quantum sensors work by exploiting quantum correlations between particles to extract more information about a system than they otherwise could.


How AI is changing IoT

The cloud can’t scale proportionately to handle all the data that comes from IoT devices, and transporting data from the IoT devices to the cloud is bandwidth-limited. No matter the size and sophistication of the communications network, the sheer volume of data collected by IoT devices leads to latency and congestion. Several IoT applications, such as autonomous cars, rely on rapid, real-time decision-making. To be effective and safe, autonomous cars need to process data and make instantaneous decisions (just like a human being). They can’t be limited by latency, unreliable connectivity, and low bandwidth. Autonomous cars are far from the only IoT application that relies on this rapid decision-making. Manufacturing already incorporates IoT devices, and delays or latency could impact the processes or limit capabilities in the event of an emergency. In security, biometrics are often used to restrict or allow access to specific areas. Without rapid data processing, there could be delays that impact speed and performance, not to mention the risks in emergent situations.


A Huge Step Forward in Quantum Computing Was Just Announced: The First-Ever Quantum Circuit

The landmark discovery, published in Nature today, was nine years in the making. "This is the most exciting discovery of my career," senior author and quantum physicist Michelle Simmons, founder of Silicon Quantum Computing and director of the Center of Excellence for Quantum Computation and Communication Technology at UNSW, told ScienceAlert. Not only did Simmons and her team create what's essentially a functional quantum processor, they also successfully tested it by modeling a small molecule in which each atom has multiple quantum states – something a traditional computer would struggle to achieve. This suggests we're now a step closer to finally using quantum processing power to understand more about the world around us, even at the tiniest scale. "In the 1950s, Richard Feynman said we're never going to understand how the world works – how nature works – unless we can actually start to make it at the same scale," Simmons told ScienceAlert. "If we can start to understand materials at that level, we can design things that have never been made before."


How to Handle Third-Party Cyber Incident Response

With tier-1 support, you have someone watching the stuff that is running. Their setup alerts them to the fact that something bad happened. They're gonna turn to a tier-2 person and say, “Hey, can you check this out and see if it really is something bad?” And so the tier-2 person takes a look. Maybe they'll take a look at that laptop or that part of the network or a server. If it wasn't a false alert, and it looks like bad behavior, then it goes to tier 3. Typically, the person running that is much more detailed and technical. They'll do a forensic analysis. And they look at all of the bits that are moving: the communication and what happened. They know adversary tactics, techniques, and procedures (TTP). They’re really good at tracking the adversary in the environment. When you're looking for a third-party incident response and support agreement, you have to know what you, as a company, have the skills to do. Then you contract out for tier 2 or tier 3. They're going to come in and provide support. Service level agreements are critical. What are you expecting? The more you want, the more you're going to pay.


IT leadership: 3 ways CIOs prevent burnout

“Prioritize yourself. It is not selfish; it’s an act of self-care. Set aside an ‘hour of power’ every day, first thing in the morning. During this hour, go analog and keep all digital distractions away. Protect that time fiercely and find an activity that nourishes your mind. For instance, learn something new and exciting, read some non-fiction that is energizing and inspiring, journal, or meditate. Find what works for you and do it every day. “Get moving. A healthy mind needs a healthy body. Do something, anything, to get some physical activity into your day. If dancing to disco is your thing, turn up the volume and go for it. Posting it on TikTok is optional, and maybe not advisable. “Stay connected. You are not alone – no matter what you’re going through, someone else has experienced it. Showing vulnerability is not a weakness, it is a strength. Build and nurture a close group of trusted advisors, preferably outside your company. Build relationships before you need them. Don’t be afraid to ask for help. They can help you work through challenges and provide an avenue to help others on this journey.”


Zscaler Posture Control Correlates, Prioritizes Cloud Risks

Zscaler Posture Control wants to make it easier for developers to take a hands-on approach to keeping their companies safe and incorporate best security practices during the development stage, according to Chaudhry. He says Zscaler hopes that 10% of its more than 5,600 customers will be using the company's entire cloud workflow protection offering within the next year. "Doing patch management after the application is built is extremely hard," Chaudhry says. "It was important for us to make sure that the developers are taking a more active role in their part of the security implementation." Zscaler wants to learn from the 210 billion transactions it processes daily to better remediate risk on an ongoing basis, addressing everything from unpatched vulnerabilities and overprivileged entitlements to Amazon S3 buckets that have erroneously been left open, Chaudhry says. Zscaler will put data points from these transactions into its artificial intelligence model to better protect customers going forward.



Quote for the day:

"Leadership is the creation of an environment in which others are able to self-actualize in the process of completing the job." -- John Mellecker

Daily Tech Digest - June 22, 2022

What you need to know about site reliability engineering

What is site reliability engineering? The creator of the first site reliability engineering (SRE) program, Benjamin Treynor Sloss at Google, described it this way: Site reliability engineering is what happens when you ask a software engineer to design an operations team. What does that mean? Unlike traditional system administrators, site reliability engineers (SREs) apply solid software engineering principles to their day-to-day work. For laypeople, a clearer definition might be: Site reliability engineering is the discipline of building and supporting modern production systems at scale. SREs are responsible for maximizing reliability, performance, availability, latency, efficiency, monitoring, emergency response, change management, release planning, and capacity planning for both infrastructure and software. ... SREs should be spending more time designing solutions than applying band-aids. A general guideline is for SREs to spend 50% of their time in engineering work, such as writing code and automating tasks. When an SRE is on call, the remaining time should be split between roughly 25% managing incidents and 25% on operations duty.


Are blockchains decentralized?

Over the past year, Trail of Bits was engaged by the Defense Advanced Research Projects Agency (DARPA) to examine the fundamental properties of blockchains and the cybersecurity risks associated with them. DARPA wanted to understand those security assumptions and determine to what degree blockchains are actually decentralized. To answer DARPA’s question, Trail of Bits researchers performed analyses and meta-analyses of prior academic work and of real-world findings that had never before been aggregated, updating prior research with new data in some cases. They also did novel work, building new tools and pursuing original research. The resulting report is a 30-thousand-foot view of what’s currently known about blockchain technology. Whether these findings affect financial markets is out of the scope of the report: our work at Trail of Bits is entirely about understanding and mitigating security risk. The report also contains links to the substantial supporting and analytical materials. Our findings are reproducible, and our research is open-source and freely distributable. So you can dig in for yourself.


Why The Castle & Moat Approach To Security Is Obsolete

At first, the shift in security strategy went from protecting one, single castle to a “multiple castle” approach. In this scenario, you’d treat each salesperson’s laptop as a sort of satellite castle. SaaS vendors and cloud providers played into this idea, trying to convince potential customers not that they needed an entirely different way to think about security, but rather that, by using a SaaS product, they were renting a spot in the vendor’s castle. The problem is that once you have so many castles, the interconnections become increasingly more difficult to protect. And it’s harder to say exactly what is “inside” your network versus what is hostile wilderness. Zero trust assumes that the castle system has broken down completely, so that each individual asset is a fortress of one. Everything is always hostile wilderness, and you operate under the assumption that you can implicitly trust no one. It’s not an attractive vision for society, which is why we should probably retire the castle and moat metaphor: it makes sense to eliminate the human concept of trust in our approach to cybersecurity and treat every user as potentially hostile.


Improving AI-based defenses to disrupt human-operated ransomware

Disrupting attacks in their early stages is critical for all sophisticated attacks but especially human-operated ransomware, where human threat actors seek to gain privileged access to an organization’s network, move laterally, and deploy the ransomware payload on as many devices in the network as possible. For example, with its enhanced AI-driven detection capabilities, Defender for Endpoint managed to detect and incriminate a ransomware attack early in its encryption stage, when the attackers had encrypted files on fewer than four percent (4%) of the organization’s devices, demonstrating improved ability to disrupt an attack and protect the remaining devices in the organization. This instance illustrates the importance of the rapid incrimination of suspicious entities and the prompt disruption of a human-operated ransomware attack. ... A human-operated ransomware attack generates a lot of noise in the system. During this phase, solutions like Defender for Endpoint raise many alerts upon detecting multiple malicious artifacts and behavior on many devices, resulting in an alert spike.


Reexamining the “5 Laws of Cybersecurity”

The first rule of cybersecurity is to treat everything as if it’s vulnerable because, of course, everything is vulnerable. Every risk management course, security certification exam, and audit mindset always emphasizes that there is no such thing as a 100% secure system. Arguably, the entire cybersecurity field is founded on this principle. ... The third law of cybersecurity, originally popularized as one of Brian Krebs’ 3 Rules for Online Safety, aims to minimize attack surfaces and maximize visibility. While Krebs was referring only to installed software, the ideology supporting this rule has expanded. For example, many businesses retain data, systems, and devices they don’t use or need anymore, especially as they scale, upgrade, or expand. This is like that old, beloved pair of worn-out running shoes that sits in a closet. This excess can present unnecessary vulnerabilities, such as a decades-old exploit discovered in some open source software. ... The final law of cybersecurity states that organizations should prepare for the worst. This is perhaps truer than ever, given how rapidly cybercrime is evolving. The risks of a zero-day exploit are too high for businesses to assume they’ll never become the victims of a breach.


How to Adopt an SRE Practice (When You’re not Google)

At a very high level, Google defines the core of SRE principles and practices as an ability to ‘embrace risk.’ Site reliability engineers balance the organizational need for constant innovation and delivery of new software with the reliability and performance of production environments. The practice of SRE grows as the adoption of DevOps grows because they both help balance the sometimes opposing needs of the development and operations teams. Site reliability engineers inject processes into the CI/CD and software delivery workflows to improve performance and reliability, but they will know when to sacrifice stability for speed. By working closely with DevOps teams to understand critical components of their applications and infrastructure, SREs can also learn the non-critical components. Creating transparency across all teams about the health of their applications and systems can help site reliability engineers determine a level of risk they can feel comfortable with. The level of service availability you target, and the performance issues you can reasonably tolerate, will also depend on the type of service you support.
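That comfortable level of risk is commonly expressed as an availability SLO and its error budget: the fraction of failures you are willing to tolerate before trading speed back for stability. A minimal sketch, with made-up targets and request counts purely for illustration:

```python
# Minimal error-budget arithmetic for an availability SLO (illustrative numbers only).

def downtime_budget_minutes(slo_target: float, window_days: int = 30) -> float:
    """Allowed downtime over the window for a time-based availability target."""
    return window_days * 24 * 60 * (1.0 - slo_target)

def budget_remaining(slo_target: float, good_requests: int, total_requests: int) -> float:
    """Fraction of the error budget still unspent for a request-based SLO."""
    allowed_failures = total_requests * (1.0 - slo_target)
    actual_failures = total_requests - good_requests
    return 1.0 - actual_failures / allowed_failures if allowed_failures else 0.0

# A 99.9% target allows roughly 43 minutes of downtime per 30 days.
print(round(downtime_budget_minutes(0.999), 1))                                      # 43.2
# 5,000 failed requests out of 10 million spends half of a 99.9% budget.
print(round(budget_remaining(0.999, good_requests=9_995_000, total_requests=10_000_000), 2))  # 0.5
```

Teams that burn through the budget shift effort from new releases back to reliability work, which is the "risk level" negotiation the excerpt describes.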


Are Snowflake and MongoDB on a collision course?

At first blush, it looks like Snowflake is seeking to get the love from the crowd that put MongoDB on the map. But a closer look shows that Snowflake is appealing not to the typical JavaScript developer who works with a variable schema in a document database, but to developers who may write in various languages, yet are accustomed to running their code as user-defined functions, user-defined table functions or stored procedures in a relational database. There’s a similar issue with data scientists and data engineers working in Snowpark, but with one notable exception: they have the alternative of executing their code through external functions. That, of course, prompts the debate over whether it’s more performant to run everything inside the Snowflake environment or bring in an external server – one that we’ll explore in another post. While document-oriented developers working with JSON might perceive SQL UDFs as foreign territory, Snowflake is making one message quite clear with the Native Application Framework: as long as developers want to run their code in UDFs, they will be just as welcome to profit off their work as the data folks.
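For readers unfamiliar with the model being described, the sketch below shows roughly what registering Python code as a UDF with Snowflake's Snowpark library looks like. The connection parameters, function body, and table name are placeholders, and registration options can vary by Snowpark version, so treat this as an outline rather than a drop-in implementation.

```python
# Sketch: Python logic registered as a Snowflake UDF via Snowpark (placeholder values throughout).
from snowflake.snowpark import Session
from snowflake.snowpark.functions import udf, col
from snowflake.snowpark.types import StringType

# Placeholder connection parameters -- supply real account details.
session = Session.builder.configs({
    "account": "<account>",
    "user": "<user>",
    "password": "<password>",
    "warehouse": "<warehouse>",
    "database": "<database>",
    "schema": "<schema>",
}).create()

# The function body executes inside Snowflake's engine, not on an external server.
@udf(name="normalize_email", return_type=StringType(), input_types=[StringType()],
     replace=True, session=session)
def normalize_email(raw: str) -> str:
    return raw.strip().lower() if raw else raw

# Hypothetical CUSTOMERS table; the UDF is used like any other column expression.
df = session.table("CUSTOMERS").select(col("EMAIL"), normalize_email(col("EMAIL")).alias("EMAIL_NORM"))
df.show()
```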


Fermyon wants to reinvent the way programmers develop microservices

If you’re thinking the solution sounds a lot like serverless, you’re not wrong, but Matt Butcher, co-founder and CEO at Fermyon, says that instead of forcing a function-based programming paradigm, the startup decided to use WebAssembly, a much more robust programming environment originally created for the browser. Using WebAssembly solved a bunch of problems for the company, including security, speed, and efficiency in terms of resources. “All those things that made it good for the browser were actually really good for the cloud. The whole isolation model that keeps WebAssembly from being able to attack the hosts through the browser was the same kind of [security] model we wanted on the cloud side,” Butcher explained. What’s more, a WebAssembly module can download really quickly and execute instantly, which answers any performance questions. And finally, instead of having a bunch of servers just sitting around waiting in case there’s peak traffic, Fermyon can start them up nearly instantly and run them on demand.


Metaverse Standards Forum Launches to Solve Interoperability

According to Trevett, the new forum will not concern itself with philosophical debates about what the metaverse will be in 10-20 years time. However, he thinks the metaverse is “going to be a mixture of the connectivity of the web, some kind of evolution of the web, mixed in with spatial computing.” He added that spatial computing is a broad term, but here refers to “3D modeling of the real world, especially in interaction through augmented and virtual reality.” “No one really knows how it’s all going to come together,” said Trevett. “But that’s okay. For the purposes of the forum, we don’t really need to know. What we are concerned with is that there are clear, short-term interoperability problems to be solved.” Trevett noted that there are already multiple standards organizations for the internet, including of course the W3C for web standards. What MSF is trying to do is help coordinate them, when it comes to the evolving metaverse. “We are bringing together the standards organizations in one place, where we can coordinate between each other but also have good close relationships with the industry that [is] trying to use our standards,” he said.


What We Now Know: Digital Transformation Reaches a Point of Clarity

Technology adoption, as part of a digital transformation initiative, is generally of a greater scale and impact than what most are accustomed to, primarily because we are looking not only to revamp parts of our IT enterprise, but to also introduce brand new technology architecture environments comprised of a combination of heavy-duty systems. In addition to the due diligence that comes with planning for and incorporating new technology innovations, with digital transformation initiatives we need to be extra careful not to be lured into over-automation. The reengineering and optimization of our business processes in support of enhancing productivity and customer-centricity need to be balanced with practical considerations and the opportunity to first prove that a given enhancement is actually effective with our customers before building further enhancements upon it. If we automate too much too soon, it will be painful to roll back, both financially and organizationally. Laying out a phased approach will avoid this.



Quote for the day:

"Real leadership is being the person others will gladly and confidently follow." -- John C. Maxwell