Daily Tech Digest - June 13, 2022

The Increasingly Graphic Nature Of Intel Datacenter Compute

What customers are no doubt telling Intel and AMD is that they want highly tuned pieces of hardware co-designed with very precise workloads, and that they will want them at much lower volumes for each multi-motor configuration than chip makers and system builders are used to. Therefore, these compute engine complexes we call servers will carry higher unit costs than chip makers and system builders are used to, but not necessarily higher profits. In fact, quite possibly lower profits, if you can believe it. This is why Intel is taking a third whack at discrete GPUs with its Xe architecture and, significantly, with the “Ponte Vecchio” Xe HPC GPU accelerator at the heart of the “Aurora” supercomputer at Argonne National Laboratory. And this time the architecture of the GPUs is a superset of the integrated GPUs in its laptops and desktops, not some Frankenstein X86 architecture that is not really tuned for graphics even if it could be used as a massively parallel compute engine, in the way that GPUs from Nvidia and AMD have been transformed.


Under the hood: Meta’s cloud gaming infrastructure

Our goal within each edge computing site is to have a unified hosting environment so that we can run as many games as possible, as smoothly as possible. Today’s games are designed for GPUs, so we partnered with NVIDIA to build a hosting environment on top of NVIDIA Ampere architecture-based GPUs. As games continue to become more graphically intensive and complex, GPUs will provide us with the high fidelity and low latency we need for loading, running, and streaming games. To run the games themselves, we use Twine, our cluster management system, on top of our edge computing operating system. We built orchestration services to manage the streaming signals and use Twine to coordinate the game servers at the edge. We built container technologies for both Windows and Android games, with a different hosting solution for each; the Windows hosting solution comes with an integration with PlayGiga. We’ve built a consolidated orchestration system to manage and run the games for both operating systems.


Google AI Introduces ‘LIMoE’

A typical Transformer comprises several “blocks,” each containing several distinct layers. One of these layers is a feed-forward network (FFN). In LIMoE and the works described above, this single FFN is replaced by an expert layer containing multiple parallel FFNs, each of which is an expert. Given a sequence of tokens to process, a router predicts which experts should handle which tokens. ... If only one expert is activated per token, the model’s compute cost is comparable to that of the regular Transformer. LIMoE does exactly that, activating one expert per example and matching the dense baselines’ computing cost. The LIMoE router, however, may see tokens of either image or text data. MoE models fail in a characteristic way when the router tries to send all tokens to the same expert; auxiliary losses, or additional training objectives, are commonly used to encourage balanced expert utilization. The Google AI team discovered that combining multiple modalities with sparsity produced novel failure modes that conventional auxiliary losses could not solve, so they created additional losses to address them.
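The expert layer and top-1 routing described above can be sketched in a few lines. The following is an illustrative mixture-of-experts layer in NumPy, not Google's implementation: the shapes, the ReLU experts, and the toy load-balancing loss are all assumptions made for demonstration.

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def moe_layer(tokens, router_w, expert_ws):
    """Top-1 expert routing: each token is handled by exactly one expert FFN.

    tokens:    (n_tokens, d_model) array
    router_w:  (d_model, n_experts) router projection
    expert_ws: list of (d_model, d_model) expert weight matrices
    """
    probs = softmax(tokens @ router_w)   # (n_tokens, n_experts) routing weights
    chosen = probs.argmax(axis=-1)       # top-1 expert index per token
    out = np.empty_like(tokens)
    for e, w in enumerate(expert_ws):
        mask = chosen == e
        if mask.any():
            out[mask] = np.maximum(tokens[mask] @ w, 0.0)  # toy ReLU "expert"
    return out, chosen

def load_balance_loss(chosen, n_experts):
    """Toy auxiliary loss: penalize deviation from uniform expert usage."""
    frac = np.bincount(chosen, minlength=n_experts) / len(chosen)
    return float(((frac - 1.0 / n_experts) ** 2).sum())
```

With top-1 routing, each token passes through exactly one expert FFN, which is why per-token compute matches the dense baseline even though total parameter count grows with the number of experts; the auxiliary loss is zero only when usage is perfectly balanced.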


Stop Splitting Yourself in Half: Seek Out Work-Life Boundaries, Not Balance

What makes boundaries different from balance? Balance implies two unequal things that you're constantly trying to make equal. It creates the expectation of a clear-cut division. A work-life balance fails to acknowledge that you are a whole person, and sometimes things can be out of balance without anything being wrong. Sometimes you'll spend days, weeks and even whole seasons of life choosing to lean more into one part of your life than the other. Boundaries ask you to think about what's important to you, what drives you, and what authenticity looks like for you. Boundaries require self-awareness and self-reflection, along with a willingness and ability to prioritize. Those qualities help you to be more aware and more capable of making decisions in a given moment. By establishing boundaries grounded in your priorities, you're more equipped to make choices. Boundaries empower you to say, "This is what I'm choosing right now. I need to be fully here until this is done." Boundaries aren't static, either.


Why it’s time for 'data-centric artificial intelligence'

AI systems need both code and data, and “all that progress in algorithms means it's actually time to spend more time on the data,” Ng said at the recent EmTech Digital conference hosted by MIT Technology Review. Focusing on high-quality data that is consistently labeled would unlock the value of AI for sectors such as health care, government technology, and manufacturing, Ng said. “If I go see a health care system or manufacturing organization, frankly, I don't see widespread AI adoption anywhere.” This is due in part to the ad hoc way data has been engineered, which often relies on the luck or skills of individual data scientists, said Ng, who is also the founder and CEO of Landing AI. Data-centric AI is a new idea that is still being discussed, Ng said, including at a data-centric AI workshop he convened last December. ... Data-centric AI is a key part of the solution, Ng said, as it could provide people with the tools they need to engineer data and build the custom AI systems they require. “That seems to me, the only recipe I'm aware of, that could unlock a lot of this value of AI in other industries,” he said.


How Do We Utilize Chaos Engineering to Become Better Cloud-Native Engineers?

The main goal of Chaos Engineering is as explained here: “Chaos Engineering is the discipline of experimenting on a system in order to build confidence in the system’s capability to withstand turbulent conditions in production.” The idea of Chaos Engineering is to identify weaknesses and reduce uncertainty when building a distributed system. As I already mentioned above, building distributed systems at scale is challenging, and since such systems tend to be composed of many moving parts, leveraging Chaos Engineering practices to reduce the blast radius of failures has proved to be a great method for that purpose. We also leverage Chaos Engineering principles to achieve things beyond its main objective. The “On-call like a king” workshops aim to achieve two goals in parallel: (1) train engineers on production failures that we have had recently; and (2) train engineers on cloud-native practices and tooling, and on how to become better cloud-native engineers!
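The experiment-driven idea in the quoted definition can be illustrated with a toy fault injector. This is a sketch only: real chaos tooling (Chaos Monkey, Litmus, Gremlin) injects faults at the infrastructure level rather than wrapping function calls, and the failure model and names below are hypothetical.

```python
import random

def chaos(fn, failure_rate=0.2, rng=None):
    """Wrap a callable so it fails with the given probability (injected fault)."""
    rng = rng or random.Random()
    def wrapped(*args, **kwargs):
        if rng.random() < failure_rate:
            raise ConnectionError("injected fault")
        return fn(*args, **kwargs)
    return wrapped

def call_with_retry(fn, attempts=3):
    """The steady-state hypothesis under test: callers that retry still
    get an answer despite injected faults."""
    for _ in range(attempts):
        try:
            return fn()
        except ConnectionError:
            continue  # transient injected fault: try again
    raise RuntimeError("service unavailable")
```

The assertion being tested is a steady-state hypothesis: even with faults injected at a given rate, the retrying caller still gets an answer. When the hypothesis fails, the experiment has found a weakness before production traffic does.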


The 3 Phases of Infrastructure Automation

Manually provisioning and updating infrastructure multiple times a day from different sources, in various clouds or on-premises data centers, using numerous workflows is a recipe for chaos. Teams will have difficulty collaborating or even sharing a view of the organization’s infrastructure. To solve this problem, organizations must adopt an infrastructure provisioning workflow that stays consistent for any cloud, service or private data center. The workflow also needs extensibility via APIs to connect to infrastructure and developer tools within that workflow, and the visibility to view and search infrastructure across multiple providers. ... The old-school, ticket-based approach to infrastructure provisioning makes IT into a gatekeeper, where they act as governors of the infrastructure but also create bottlenecks and limit developer productivity. But allowing anyone to provision infrastructure without checks or tracking can leave the organization vulnerable to security risks, non-compliance and expensive operational inefficiencies.
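One common middle ground between ticket-based gatekeeping and an ungoverned free-for-all is policy-as-code: every self-service provisioning request is checked automatically against guardrails inside the workflow. The request shape and policy below are hypothetical; real workflows would typically express this with tools such as OPA or HashiCorp Sentinel.

```python
# Hypothetical guardrails: approved instance sizes and mandatory tags keep
# self-service provisioning governed without a human gatekeeper.
POLICY = {
    "allowed_sizes": {"small", "medium", "large"},
    "required_tags": {"owner", "cost-center"},
}

def validate_request(request, policy=POLICY):
    """Return a list of policy violations for one provisioning request
    (an empty list means the request may proceed automatically)."""
    violations = []
    if request.get("size") not in policy["allowed_sizes"]:
        violations.append(f"size {request.get('size')!r} not allowed")
    missing = policy["required_tags"] - set(request.get("tags", {}))
    if missing:
        violations.append(f"missing tags: {sorted(missing)}")
    return violations
```

A compliant request passes straight through; a non-compliant one is rejected with actionable reasons, which preserves developer speed while keeping security and cost controls enforced.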


Questioning the ethics of computer chips that use lab-grown human neurons

While silicon computers transformed society, they are still outmatched by the brains of most animals. For example, a cat’s brain contains 1,000 times more data storage than an average iPad and can use this information a million times faster. The human brain, with its trillion neural connections, is capable of performing 15 quintillion operations per second. This can only be matched today by massive supercomputers using vast amounts of energy. The human brain, by contrast, uses only about 20 watts of energy, about the same as it takes to power a lightbulb. It would take 34 coal-fired plants generating 500 megawatts each to power modern data storage centres holding the same amount of data as is contained in one human brain. Companies do not need brain tissue samples from donors, but can simply grow the neurons they need in the lab from ordinary skin cells using stem cell technologies. Scientists can engineer cells from blood samples or skin biopsies into a type of stem cell that can then become any cell type in the human body.


How Digital Twins & Data Analytics Power Sustainability

Seeding technology innovation across an enterprise requires broader and deeper communication and collaboration than in the past, says Aapo Markkanen, an analyst in the technology and service providers research unit at Gartner. “There’s a need to innovate and iterate faster, and in a more dynamic way. Technology must enable processes such as improved materials science and informatics and simulations.” Digital twins are typically at the center of the equation, says Mark Borao, a partner at PwC. Various groups, such as R&D and operations, must have systems in place that allow teams to analyze diverse raw materials, manufacturing processes, and recycling and disposal options, and to understand how different factors are likely to play out over time, before an organization “commits time, money and other resources to a project,” he says. These systems “bring together data and intelligence at a massive scale to create virtual mirrored worlds of products and processes,” Podder adds. In fact, they deliver visibility beyond Scope 1 and Scope 2 emissions, and into Scope 3 emissions.


API security warrants its own specific solution

If the API doesn’t apply sufficient internal rate limiting on parameters such as response timeouts, memory, payload size, number of processes, records and requests, attackers can send multiple API requests to create a denial of service (DoS) attack. This overwhelms back-end systems, crashing the application or driving resource costs up. Prevention requires API resource consumption limits to be set. This means setting thresholds for the number of API calls and client notifications such as resets and lockouts. Server-side, validate the size of the response in terms of the number of records and resource consumption tolerances. Finally, define and enforce the maximum size of data the API will support on all incoming parameters and payloads, using metrics such as the length of strings and the number of array elements. Broken function level authorization is effectively a different spin on BOLA: the attacker is able to send requests to functions that they are not permitted to access. It amounts to an escalation of privilege, because access permissions are not enforced or segregated, enabling the attacker to impersonate an admin, helpdesk agent, or superuser and to carry out commands or access sensitive functions, paving the way for data exfiltration.
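The consumption limits described above can be sketched as two server-side checks: a per-client request threshold and hard caps on payload size and array length. This is a minimal illustration with assumed thresholds and a simple fixed-window algorithm, not a production-grade limiter.

```python
import time

class RateLimiter:
    """Fixed-window request limiter per client: a minimal sketch of
    'set thresholds for the number of API calls'."""
    def __init__(self, limit, window_seconds=60):
        self.limit = limit
        self.window = window_seconds
        self.counts = {}  # client_id -> (window_start, request_count)

    def allow(self, client_id, now=None):
        now = time.time() if now is None else now
        start, count = self.counts.get(client_id, (now, 0))
        if now - start >= self.window:        # window expired: reset
            start, count = now, 0
        if count >= self.limit:
            return False                      # reject: client must back off
        self.counts[client_id] = (start, count + 1)
        return True

# Assumed caps for illustration only.
MAX_BODY_BYTES = 64 * 1024
MAX_ARRAY_ITEMS = 100

def validate_payload(body, items):
    """Enforce maximum payload size and array length on incoming requests."""
    if len(body) > MAX_BODY_BYTES:
        raise ValueError("payload too large")
    if len(items) > MAX_ARRAY_ITEMS:
        raise ValueError("too many array elements")
```

In practice these checks sit in middleware in front of every handler, so a flood of requests or an oversized payload is rejected before it can consume back-end resources.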



Quote for the day:

"To make a decision, all you need is authority. To make a good decision, you also need knowledge, experience, and insight." -- Denise Moreland

Daily Tech Digest - June 11, 2022

Cloud computing security: Where it is, where it's going

Most businesses use multiple cloud services and cloud providers, a hybrid approach that can support granular security options where vital data is kept close (perhaps in a private cloud) while less sensitive applications run in a public cloud to take advantage of big tech's economies of scale. But the hybrid model also introduces new complications, as every provider will have a slightly different set of security models that cloud customers will need to understand and manage. That takes time and (often elusive) expertise. But misconfigured services are high on the list of the causes for security incidents, along with even more basic failures like poor passwords and identity controls. Little surprise that companies are evaluating tools to automate much of this. That's leading to interest in new technologies such as Cloud Security Posture Management (CSPM) tools, which can help security teams spot and fix potential security issues around misconfiguration and compliance in the cloud, so they know the same rules are being enforced across their cloud services.
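At its core, a CSPM tool repeatedly scans resource configurations against rules that encode known misconfigurations. The resource schema and the two checks below are invented for illustration; real tools evaluate hundreds of provider-specific rules.

```python
def scan_configs(resources):
    """Toy CSPM-style scan: return (resource_name, finding) pairs for
    configurations that violate simple security rules."""
    findings = []
    for r in resources:
        if r.get("type") == "storage_bucket" and r.get("public_access", False):
            findings.append((r["name"], "bucket allows public access"))
        if r.get("type") == "database" and not r.get("encrypted", False):
            findings.append((r["name"], "database is not encrypted at rest"))
    return findings
```

Running such checks continuously, across every provider in a hybrid estate, is what lets security teams enforce one set of rules despite each cloud's slightly different security model.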


Jump Into the DevOps Pool: The Water Is Fine

If you’re thinking that becoming a member of a DevOps team sounds interesting, what are the things you need to consider? Having experience in just about any aspect of IT gives you the technical foundation to make yourself a viable candidate. Do some research. What does it take to hone your existing skills to become a successful member of a DevOps team? You’ll likely find that it takes you in a direction well within your reach. Your technical skills are just the beginning though. Your skills will contribute to the broader objective of the DevOps team. Valuable DevOps team members understand how their role fits into the bigger picture. It’s not necessary to know the details of another team member’s discipline. It is, however, important to understand how each of your roles contributes to the DevOps process. This implies that you take some time to learn about each role’s function. Becoming an invaluable DevOps team member goes one step further. DevOps engineers who possess or develop the interpersonal skills to work beyond their team in guiding others, become key players within an organization. 


How to prioritize cloud spending: 5 strategies for architects

The price of spot instances changes over days and weeks, so you can't predict the cost at the time of purchase. The amount of money saved varies depending on the type of resource: low-priority instances are the least expensive, but they may be unavailable or shut off abruptly depending on capacity demand in the region. Such cases are rare, though; for example, AWS states that the average interruption frequency across all regions and instance types doesn't exceed 10%. Spot instances are best for stateless workloads, batch operations, and other fault-tolerant or time-flexible tasks. ... Begin by examining your cloud provider's transfer fees. Then, find ways to limit the number of data transfers in your cloud architecture. For example, you may need to change your application's behavior and architecture to use computing resources in the closest data location. Move on-premises apps that frequently access cloud-hosted data into the cloud. In contrast to the cloud, certain resources (such as network bandwidth) are considered free in traditional datacenters, so if you move applications from on-premises datacenters, modify your application architecture to limit the amount of data transferred.
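The arithmetic behind both strategies is simple enough to sketch. The hourly prices, egress rate, and free-tier threshold below are illustrative assumptions, not actual provider rates.

```python
def spot_savings(on_demand_hourly, spot_hourly, hours):
    """Estimated saving from running a fault-tolerant batch job on spot
    capacity instead of on-demand instances."""
    return (on_demand_hourly - spot_hourly) * hours

def transfer_cost(gb_out, price_per_gb=0.09, free_tier_gb=100):
    """Egress cost with an assumed free tier and a flat rate above it."""
    billable = max(0.0, gb_out - free_tier_gb)
    return billable * price_per_gb
```

Sketches like these make the trade-off concrete: spot savings scale with runtime hours (minus the cost of occasional interruptions), while egress fees scale with every gigabyte an architecture moves out of the provider.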


Defensive Cyber Attacks Declared Legal by UK AG

The move highlights a general lack of international agreement about when defensive cyber attacks should be considered appropriate. There has long been a murky world of online espionage in which countries have tacitly agreed to not respond with military force, due in no small part to degrees of plausible deniability and a great difficulty in displaying concrete evidence to the public that another nation’s covert hacking teams were behind a virtual break-in. This unofficial understanding has survived in the internet age, even as allies have been caught spying on each other, so long as everyone refrained from using cyber attacks to cause physical damage. Some developments in recent years have strained that arrangement, including Russia’s repeated cyber attacks on services in Ukraine and the recent willingness of cyber criminals to hit foreign critical infrastructure and government agencies with ransomware attacks. The UK AG has expressed that there is a pressing need to establish formal rules regarding defensive cyber attacks given the demonstrated possibility of devastating incidents that could cause actual damage to civilians, and that existing non-intervention agreements could serve as a launch point.


How AI can give companies a DEI boost

Although many companies are experimenting with AI as a tool to assess DEI in these areas, Greenstein noted, they aren’t fully delegating those processes to AI, but rather are augmenting them with AI. Part of the reason for their caution is that in the past, AI often did more harm than good in terms of DEI in the workplace, as biased algorithms discriminated against women and non-white job candidates. “There has been a lot of news about the impact of bias in the algorithms looking to identify talent,” Greenstein said. For example, in 2018, Amazon was forced to scrap its secret AI recruiting tool after the tech giant realized it was biased against women. And a 2019 study conducted by Harvard Business Review concluded that AI-enabled recruiting algorithms introduced anti-Black bias into the process. AI bias is caused, often unconsciously, by the people who design AI models and interpret the results. If an AI is trained on biased data, it will, in turn, make biased decisions. For instance, if a company has hired mostly white, male software engineers with degrees from certain universities in the past, a recruiting algorithm might favor job candidates with similar profiles for open engineering positions.
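One simple check for the kind of skew described here is comparing selection rates across groups, as in the "four-fifths rule" used in US employment-discrimination analysis. A sketch with made-up numbers:

```python
def selection_rates(outcomes):
    """outcomes: {group: (selected, total)} -> {group: selection rate}."""
    return {g: s / t for g, (s, t) in outcomes.items()}

def disparate_impact_ratio(outcomes, reference_group):
    """Ratio of each group's selection rate to the reference group's.
    The four-fifths rule flags ratios below 0.8 for review."""
    rates = selection_rates(outcomes)
    ref = rates[reference_group]
    return {g: r / ref for g, r in rates.items()}
```

In the test data below, group B is selected at half of group A's rate, well under the 0.8 threshold that typically triggers review; checks like this are how teams audit a model's outputs before, not after, it influences hiring.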


A CFO’s perspective on sustainable, inclusive growth

We’ve faced an ongoing health crisis that turned into a social crisis that went to an economic crisis and, unfortunately, we’re facing humanitarian crises, such as the war in Ukraine. But the fact of the matter is, people are making decisions, different decisions than where we were three to five years ago. And I believe they’re challenging the purpose of organizations, businesses, and leadership. As we talk about sustainability and inclusivity with that combination of the foundation for growth, that’s what the priorities of people are today. You asked about today’s CFOs and sustainability, inclusivity, growth. I truly believe that history will be written about these times that we’ve been operating in. As CFOs, we’re always—Eric, as you know quite well—focused on the what: productivity, efficiency, operational stability, liquidity. But I think these times will be less about pure financials and more about a culture. And when I think about culture, IBM—let me give a little shout out to my company—has a framework. We’ve been in existence for 111 years. We have a framework around culture that’s really grounded in purpose, united in values, and demonstrated through growth behaviors. 


Container adoption: 5 expert tips

“If you want to move beyond containers as a tool for developers and put them into production, that means you’ll also be adopting an orchestration layer like Kubernetes and the various monitoring, CI/CD, logging, and tracing tools that go with it,” Haff says. “Which is exactly what many organizations are doing.” Containers and Kubernetes tend to go hand-in-hand because without that orchestration layer, teams otherwise find that managing containers at any kind of scale in production requires untenable effort. Haff notes that 70 percent of IT leaders surveyed in the State of Enterprise Open Source 2022 report said their organizations were using Kubernetes. Speaking of open source, containerization has open source DNA – and adoption often leads to uptake of other open source technologies, too. Make sure you’re using up-to-date, reliable, and secure code. “Containerization leads to more use of open source and other public components,” Korren says. “There are a lot of useful, well-maintained code components on the Internet, but there are many that are not.”


Create End-To-End Integration Of Tools & Data For Flow Insight & Traceability

Without a long-term strategy or clearly assigned data-custody across the digital product lifecycle, data access and management is fragmented between process owners, application owners, or development teams, becoming more unstable with every company re-organization or staff departure. Many organizations reluctantly determine that data islands, duplicate data stores, and conflicting data are inevitable. The chain reaction of resulting issues is both overwhelming and costly. It may not be possible to do a meaningful root cause analysis to resolve incidents, assess the efficiency of digital product delivery, assess the value compared with cost, or receive valuable feedback from development before deployment. Design flaws are repeated, and incorrect processes are unintentionally reinforced. The lack of end-to-end visibility results in a slow response time to development, change, and incident tickets because there is no traceability or data integrity for tracking down the root cause of problems. Add that when data ownership is transferred or unclear, frustrated teams may dodge responsibility and throw issues “over the fence” to other stakeholders through the course of the digital product’s lifecycle.


Using Behavioral Analytics to Bolster Security

Josh Martin, product evangelist at security firm Cyolo, explains that behavioral analytics would not be possible without ML and AI. “The data collected from the detection phase will be fed into multiple AI and ML models that will allow for deeper inspection of access habits to detect patterns or outliers for specific users,” he says. He outlines a potential use case for behavioral analytics and zero trust focused on a team member working from home. This user logs in every day from their corporate Mac around 8:00 in the morning and will either log into Salesforce or O365 first thing. “Considering this is normal for the user, the AI/ML mechanisms will start to look for anything outside of this baseline,” Martin says. “So, when the user takes a vacation to a different state and uses a personal Windows laptop to access ADP around 10 o’clock at night, this would raise a flag and shut down further authentication attempts until a security analyst can investigate. In this case, it could have been a malicious entity using stolen credentials to access payroll information.” From his perspective, behavioral analytics is likely to become the new norm as AI/ML products and knowledge become more accessible to the masses.
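The scenario Martin outlines can be reduced to a rule-based sketch: score how many signals deviate from a user's baseline and block when the score crosses a threshold. A production system would learn these baselines with ML rather than hard-code them; the profile, signals, and threshold below are hypothetical.

```python
# Hypothetical learned baseline for the user in Martin's example.
BASELINE = {
    "alice": {
        "devices": {"corp-mac"},
        "login_hours": range(7, 19),  # roughly 8 a.m. starts
        "states": {"CA"},
    },
}

def risk_score(user, device, hour, state, baseline=BASELINE):
    profile = baseline.get(user)
    if profile is None:
        return 3  # unknown user: maximum risk
    score = 0
    score += device not in profile["devices"]      # unfamiliar device
    score += hour not in profile["login_hours"]    # unusual time of day
    score += state not in profile["states"]        # unusual location
    return score

def should_block(user, device, hour, state, threshold=2):
    """Block further authentication and escalate to an analyst when
    too many signals deviate from the baseline."""
    return risk_score(user, device, hour, state) >= threshold
```

The vacation login from the article (personal Windows laptop, 10 p.m., different state) trips all three signals, while the routine morning login on the corporate Mac trips none.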


Rekindling the thrill of programming

We could say that programming is an activity that moves between the mental and the physical. We could even say it is a way to interact with the logical nature of reality. The programmer blithely skips across the mind-body divide that has so confounded thinkers. “This admitted, we may propose to execute, by means of machinery, the mechanical branch of these labours, reserving for pure intellect that which depends on the reasoning faculties.” So said Charles Babbage, originator of the concept of a digital programmable computer. Babbage was conceiving of computing in the 1800s. He and his collaborator Lovelace were conceiving not of a new work, but of a new medium entirely. They wrangled out of the ether a physical ground for our ideations, a way to put them to concrete test and make them available in that form to other people for consideration and elaboration. In my own life of studying philosophy, I discovered the discontent of a form of thought whose rubber never meets the road. In this vein, Mr. Brooks completes his thought above when he writes, “Yet the program construct, unlike the poet’s words, is real in the sense that it moves and works, producing visible outputs separate from the construct itself.”



Quote for the day:

"Great Groups need to know that the person at the top will fight like a tiger for them." -- Warren G. Bennis

Daily Tech Digest - June 10, 2022

Everything You Need to Know About Enterprise Architecture vs. Project Management

Even though both have their own set of specialized skills, they still correlate in certain areas. Sometimes different teams are working on various initiatives or parts of a landscape. In the middle of the project, they find out that each team needs to work on the same bit of the software or service ... Executing such a situation without any mishap, however, requires coordination and a good system in place to foresee these dependencies, since it is hard to keep track of them all and some may come back to bite you later. This is where enterprise architecture is needed. Enterprise architects are usually well aware of these relationships, and with their expertise in architecture models they can uncover dependencies that are usually unknown to the project or program managers. This is where enterprise architecture and project management intersect: enterprise architecture is about managing the coherence of your business, whereas project management is responsible for planning and managing, usually from a financial and resource perspective.


A Minimum Viable Product Needs a Minimum Viable Architecture

In short, as the team learns more about what the product needs to be, they only build as much of the product and make as few architectural decisions as is absolutely essential to meet the needs they know about now; the product continues to be an MVP, and the architecture continues to be an MVA supporting the MVP. The reason for both of these actions is simple: teams can spend a lot of time and effort implementing features and quality attribute requirements (QARs) in products, only to find that customers don’t share their opinion on their value; beliefs in what is valuable are merely assumptions until they are validated by customers. This is where hypotheses and experiments are useful. In simplified terms, a hypothesis is a proposed explanation for some observation that has not yet been proven (or disproven). In the context of requirements, it is a belief that doing something will lead to something else, such as delivering feature X will lead to outcome Y. An experiment is a test that is designed to prove or reject some hypothesis.
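The hypothesis "delivering feature X will lead to outcome Y" can be made concrete as a minimal experiment check: did the variant's observed success rate beat the control's by a meaningful margin? The minimum-lift threshold below is an assumption, and a real experiment would also test statistical significance rather than compare raw rates.

```python
def experiment_result(control, variant, min_lift=0.05):
    """control/variant: (successes, trials) observed in each arm.
    Returns True if the variant's success rate exceeds the control's
    by at least `min_lift` (absolute), supporting the hypothesis."""
    c_rate = control[0] / control[1]
    v_rate = variant[0] / variant[1]
    return v_rate - c_rate >= min_lift
```

If the experiment fails, the team has learned cheaply that the assumption was wrong, before committing more product and architecture work to it.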


In Search of Coding Quality

The major difference between good- and poor-quality coding is maintainability, states Kulbir Raina, Agile and DevOps leader at enterprise advisory firm Capgemini. Therefore, the best direct measurement indicator is operational expense (OPEX). “The lower the OPEX, the better the code,” he says. Other variables that can be used to differentiate code quality are scalability, readability, reusability, extensibility, refactorability, and simplicity. “Code quality can also be effectively measured by identifying technical debt (non-functional requirements) and defects (how well the code aligns with the laid-out specifications and functional requirements),” Raina says. “Software documentation and continuous testing provide other ways to continuously measure and improve the quality of code using faster feedback loops,” he adds. ... The impact development speed has on quality is a question that's been hotly debated for many years. “It really depends on the context in which your software is running,” Bruhmuller says, adding that his organization constantly deploys to production, relying on testing and monitoring to ensure quality.


A chip that can classify nearly 2 billion images per second

While current, consumer-grade image classification technology on a digital chip can perform billions of computations per second, making it fast enough for most applications, more sophisticated image classification such as identifying moving objects, 3D object identification, or classification of microscopic cells in the body, are pushing the computational limits of even the most powerful technology. The current speed limit of these technologies is set by the clock-based schedule of computation steps in a computer processor, where computations occur one after another on a linear schedule. To address this limitation, Penn Engineers have created the first scalable chip that classifies and recognizes images almost instantaneously. Firooz Aflatouni, Associate Professor in Electrical and Systems Engineering, along with postdoctoral fellow Farshid Ashtiani and graduate student Alexander J. Geers, have removed the four main time-consuming culprits in the traditional computer chip: the conversion of optical to electrical signals, the need for converting the input data to binary format, a large memory module, and clock-based computations.


Scrum, Remote Teams, & Success: Five Ways to Have All Three

Agile teams have long made use of team agreements (or team working agreements). These set ground rules for the team, created by the team and enforced by the team. When our working environment shifts as much as it has recently, consider establishing some new team agreements specifically designed to address remote work. Examples? On-camera expectations, team core working hours (especially if you’re spread across multiple time zones) and setting aside focus time during which interruptions are kept to a minimum. ... One of the huge disadvantages of a remote team is the lack of personal connections that are made just grabbing a cup of coffee or standing around the water cooler. Remote teams need to be deliberate about counteracting isolation. Consider taking the first few minutes of a meeting to talk about anything non-work related. Set up a time for a team show-and-tell in which each team member can share something from their home or background in their home office that matters to them. Find excuses for the team to share anything that helps teammates get to know each other more—as human beings, not just co-workers. 


Cisco introduces innovations driving new security cloud strategy

Ushering in the next generation of zero trust, Cisco is building solutions that enable true continuous trusted access by constantly verifying user and device identity, device posture, vulnerabilities, and indicators of compromise. These intelligent checks take place in the background, leaving the user to work without security getting in the way. Cisco is introducing less intrusive methods for risk-based authentication, including the patent-pending Wi-Fi fingerprint as an effective location proxy without compromising user privacy. To evaluate risk after a user logs in, Cisco is building session trust analysis using the open Shared Signals and Events standards to share information between vendors. Cisco unveiled the first integration of this technology with a demo of Cisco Secure Access by Duo and Box. “The threat landscape today is evolving faster than ever before,” said Aaron Levie, CEO and Co-founder of Box. “We are excited to strengthen our relationship with Cisco and deliver customers with a powerful new tool that enables them to act on changes in risk dynamically and in near real-time.”


10 key roles for AI success

The domain expert has in-depth knowledge of a particular industry or subject area. This person is an authority in their domain, can judge the quality of available data, and can communicate with the intended business users of an AI project to make sure it has real-world value. These subject matter experts are essential because the technical experts who develop AI systems rarely have expertise in the actual domain the system is being built to benefit, says Max Babych, CEO of software development company SpdLoad. ... When Babych’s company developed a computer-vision system to identify moving objects for autopilots as an alternative to LIDAR, they started the project without a domain expert. Although research proved the system worked, what his company didn’t know was that car brands prefer LIDAR over computer vision because of its proven reliability, and there was no chance they would buy a computer vision–based product. “The key advice I’d like to share is to think about the business model, then attract a domain expert to find out if it is a feasible way to make money in your industry — and only after that try to discuss more technical things,” he says.


Be Proactive! Shift Security Validation Left

When security testing only kicks in at the end of the SDLC, deployment delays caused by newly uncovered critical security gaps create rifts between DevOps and SOC teams. Security often gets pushed to the back of the line, and there's not much collaboration when introducing a new tool or method, such as launching occasional simulated attacks against the CI/CD pipeline. Conversely, once a comprehensive continuous security validation approach is baked into the SDLC, daily attack-technique emulations, invoked through the automation built into XSPM technology, identify misconfigurations early in the process, incentivizing close collaboration between DevSecOps and DevOps teams. With inter-team collaboration built into both the security and software development lifecycles, and immediate visibility into security implications, the goal alignment of both teams eliminates the strife and friction once born of internal politics. Shifting extreme left with comprehensive continuous security validation lets you map and understand the investments made in various detection and response technologies, and apply the findings to preempt attack techniques across the kill chain and protect real functional requirements.


Unlocking the ‘black box’ of education data

Technology enables education leaders to understand a child’s learning journey in a way that hasn’t previously been possible, whether by logging the time a child spends on a certain task, recording areas in which students consistently do well or poorly, or noting hours spent in extra-curricular programmes. Edtech allows the collection and centralisation of data on a child across their years spent in school. This data can then be used to build up a holistic picture of the student’s learning to share with everyone who supports that pupil, from teachers, parents and carers to learning support assistants. They are all able to contribute to the discussion on a pupil’s areas for focus and improvement. Artificial Intelligence (AI) data analytics can be a valuable tool, allowing teachers to visualise and assess the most effective ways of learning in the classroom and the metacognition processes occurring, and to intervene if needed to support learning. Beyond the classroom, education leaders and policy makers can aggregate data to develop strategies and policies.


How to Retain Talent in Uncertain Circumstances

“There was confusion and uncertainty, which led to a willingness for those professionals in those organizations to listen to the opportunities we had,” Sasson says. “There was no visibility whatsoever, which created an environment where they were more open to hearing what else was out there.” In some cases a company may be planning downsizing after a merger, and they may be allowing that uncertainty to linger because they want some employees to voluntarily find new jobs, Sasson says. However, in other cases organizations may want to retain their valuable talent, particularly in this tight job market. Just because there’s a merger or acquisition doesn’t necessarily mean that everyone will make a stampede to the door. ... Sasson’s team asked the employees at Proofpoint why they weren’t interested in new opportunities. “From what we understand, the CEO at Proofpoint and the Thoma Bravo team -- they seemed to do an excellent job of communicating the value of the acquisition and limiting the jitters that would typically be felt by the rank and file,” Sasson said.



Quote for the day:

"A leader should demonstrate his thoughts and opinions through his actions, not through his words." -- Jack Weatherford

Daily Tech Digest - June 06, 2022

How to Build a Data Science Enablement Team

Data scientists may use processes and tools you’re unfamiliar with, and those processes may not initially jibe with your own. For instance, data scientists may not think twice about emailing you code via Jupyter Notebooks. Or, they might use different versions of Python to create base images, with none in synchronization with each other. Consider offering alternatives to help them improve their workflows (and make your life a bit easier). For example, help them organize what they’re working on by setting up a Jupyter Hub instance or git repository. Making their jobs easier will help build the relationship. ... Most data scientists don’t want to become software developers any more than you probably want to become a data scientist. But bringing them into the DSET isn’t about getting them to learn more about software development — it’s about helping both you and them become more cognizant of the processes you both adhere to. So, while you’re empathizing with their work patterns, get them to understand how adopting some of your processes can help them in their daily workflows.


Feds Issue Alerts for Several Medical Device Security Flaws

The FDA in its alert for healthcare providers says the RUO devices are typically used in a development stage and are not for use in diagnostic procedures. But, it adds, many laboratories may be using the devices with tests for clinical diagnostic use. The vulnerabilities are exploitable remotely and have a low attack complexity, CISA says. The Illumina vulnerabilities involve path traversal, unrestricted upload of file with dangerous type, improper access control, and cleartext transmission of sensitive information. The vulnerabilities were scored as having CVSS v3 base scores of between 7.4 and 10.0. "Successful exploitation of these vulnerabilities may allow an unauthenticated malicious actor to take control of the affected product remotely and take any action at the operating system level," CISA warns. "An attacker could impact settings, configurations, software, or data on the affected product and interact through the affected product with the connected network." "Illumina has confirmed a security vulnerability affecting software in certain Illumina desktop sequencing instruments," the company says in a statement provided to Information Security Media Group. 


Crypto FUD: Quantum Computing Will Dwarf Blockchains’ Security

According to the research carried out by the team at Sussex, only a quantum computer with a processing power of over 317 million qubits could break the SHA-256 algorithm in an hour or two. At the moment, IBM's largest quantum computer boasts around 127 qubits, showing that it is still far behind the processing power required to start causing damage to Bitcoin's algorithms. For Bitcoin's blockchain to be broken, an attacker would need to perform a 51% attack, taking over the majority of the blocks' mining process. Bitcoin mining is done using special hardware called Application-Specific Integrated Circuits (ASICs), made specifically for mining rigs. The circuits rely on a hash-function property known as “puzzle friendliness”: every input is expected to produce a valid output, and if it doesn't, the whole system detects it and the miner gets notified. That means the operation of the ASICs cannot be tampered with by any computer without all miners working on the same block being notified concurrently.
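The “hard to find, easy to verify” behaviour that makes tampering detectable can be sketched in a few lines. The header string, difficulty, and search budget below are illustrative only, not Bitcoin's real block format or difficulty:

```python
import hashlib

def mine(block_header: str, difficulty_bits: int, max_nonce: int = 2_000_000):
    """Search for a nonce whose SHA-256 digest falls below a target.

    Finding the nonce takes brute force; checking it takes one hash --
    the "puzzle friendliness" property that lets every miner verify
    everyone else's work instantly."""
    target = 2 ** (256 - difficulty_bits)  # more difficulty bits = smaller target
    for nonce in range(max_nonce):
        digest = hashlib.sha256(f"{block_header}{nonce}".encode()).digest()
        if int.from_bytes(digest, "big") < target:
            return nonce
    return None  # puzzle unsolved within the search budget

nonce = mine("block-42", difficulty_bits=16)  # roughly 65,000 attempts on average
```

Verifying the result takes a single hash, which is why any miner who tampers with a block is immediately caught by everyone else re-checking it.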


8 ways level of detail could improve digital twins

The architectural, engineering, and construction industry uses a related concept called Level of Development in Building Information Modeling (BIM) to characterize changes in technical design depth across a project’s development process. It describes the level to which planning teams have fleshed out the specifications, geometry and attached information. In the early stages, planning groups may just want to quickly estimate the overall cost and complexity of a project before proceeding. Later, domain experts such as electricians, plumbers and structural engineers can plan out exact gauges of wire and pipe in richer depth. These later levels of development can help plan orders and schedule the construction sequence so that teams do not interfere with each other. ... In good experience design, it is often helpful to guide a user’s attention to a particular detail. For example, it might be more beneficial to highlight the exact screws a repair technician needs to remove rather than render a scene in complete detail using an augmented reality overlay. Researchers believe that using LOD for glanceable interfaces could clarify complicated repairs and procedures. In musical concerts, visual augmentation with LOD could enhance the audience experience.


Considering digital trust: why zero trust needs a rethink

Knowing that digital trust is now critical for all businesses and organisations today; why has zero trust gained so much attention? Well, simply put, we can’t assume that we should trust everything, take a zero trust approach, then establish and maintain trust. From a security leader and CISO perspective, that means that we need to establish and maintain trust with all entities that make up and interact with the business. As such, digital trust here is the trust in machines, software, devices, and humans interacting with digital services that now power our world. It should not be confused with zero trust, which is often misinterpreted. The ‘zero’ implies no trust at all exists. Trust is dynamic, and it needs to be constantly upheld. The way enterprises approach establishing digital trust is important to ensure the functioning of the business, but specifically the security of both human and machine identities. While many organisations focused on zero trust initiatives over the past few years, many recognised that trust in humans and machines is the foundational layer. In the modern enterprise, security leaders must design solid identity-first security frameworks deeply rooted in cryptography for digital trust to be established.


Connected Healthcare Takes Huge Leap Forward

Business and IT leaders who ignore connected healthcare do so at their own peril. A study from Doctor.com found that 83% of patients using telemedicine plan to continue with it after the pandemic. In addition, 68% prefer to use their mobile phone to make appointments and handle other tasks, and 91% say that connected tech is valuable for managing prescriptions and compliance. At some point -- and there’s some indication that it’s already happening -- consumer companies like Apple, Withings, ÅŒura and Fitbit will steal away opportunities for new products and services. Already, drug store chains and smaller and more disruptive companies are establishing footholds, and new and innovative healthcare products are appearing. “There are growing opportunities for data and app-related services, apps, subscriptions and more but traditional healthcare providers often don’t see this,” Schooley points out. Establishing an IT foundation to support connected health is vital. Hall says this includes a cloud-first architecture, integrating IoT and edge technologies, focusing on data standards, building more sophisticated and interactive apps, exploring partnerships, and cultivating skillsets needed to support both innovation and operations.


The costs and damages of DNS attacks

A DNS attack does not just result in an inconvenient business disruption but can be a costly expense for organizations. In the past 12 months, APAC has become the region with the highest average cost of a successful attack at $1,036,040, an increase of 14% when compared to 2021, while EMEA and North America’s average cost of a successful attack has decreased by 4% and 7% respectively. Malaysia (21%), Germany (18%) and both India and the UK (14% each) experienced the highest increase in the cost of an attack, while Spain saw its cost of damages plummet by almost half (48%) when compared to 2021. France and the US were the only other countries that saw a decline in the average cost, with 21% and 5% respectively. Cybercriminals are continuing to use all available tools to gain access to networks, disrupt the business and steal data by specifically targeting the hybrid workforce, with DNS-based attacks becoming increasingly pervasive across all industries. In the last year, 70% of organizations suffered in-house and cloud application downtime, with the average time to mitigate these threats increasing to 6 hours and 7 minutes, meaning that employees, partners, and customers were unable to access any services.


Government Agencies Seize Domains Used to Sell Credentials

"The actions executed by our international partners included the arrest of a main subject, searches of several locations, and seizures of the web server's infrastructure," according to the DOJ. In December 2020, Britain's National Crime Agency reported arrests of 21 individuals on suspicion of purchasing personally identifiable information from the WeLeakInfo website for a variety of purposes, including the buying and selling of malicious cyber tools such as remote access Trojans, aka RATs, as well as to buy "cryptors," which can be used to obfuscate code in malware, according to the NCA. It has said that all are men, ranging in age from 18 to 38 and the arrests took place over a five-week period starting in November 2020. Beyond the 21 people arrested by police, another 69 individuals in England, Wales and Northern Ireland have received warnings from the NCA or other domestic law enforcement agencies, saying they may have engaged in criminal activity tied to the investigation. Sixty of those individuals also received cease-and-desist orders from police.


The Value of Data Mobility for Modern Enterprises

Despite all the excitement about data analytics, it’s not a silver bullet. Turning data into real business value isn’t simply a matter of deploying all the right tools. To be sure, it requires some smart investment in good technology, but ultimately, it’s got to be about identifying high-value business cases and making sure that your business users have what they need to deliver positive outcomes. Business success is virtually always about compromise. For years, CTOs have grappled with the pros and cons of unified systems versus best-of-breed environments. They have weighed the advantages of diverse, purpose-built systems against the inherent value of a large-scale monolithic platform that offers a holistic approach to the business. In the end, best-of-breed won that battle. As a result, the problem of data silos became more pronounced. The hunger for real-time analytics has rendered the pain caused by data silos far more palpable. But there is good news; if we make the data from all those different systems available in a single place, we can have the best of both worlds.


Digital transformation: How to gain organizational buy-in

Data analytics does not always require data scientists. CIOs and IT leaders often reach a turning point when they discover that most employees can be trained to become resident data analytics subject experts. When employees combine new knowledge of data analysis with their existing knowledge of the processes or machines, they can quickly be at the forefront of a digital journey. This is welcome news to most IT leaders, simply because the demand for skillsets in data science and cybersecurity has skyrocketed. Upskilling existing team members can be critical in attaining sustained adoption and continuous improvements of digital solutions. This includes long-term improvements in employee engagement and retention, increased cross-functional collaboration, and adoption of modern technology trends. Along with their technical skills, employees need to be skilled at diagnostics and problem-solving using the data now readily available to them. Employees who may have previously been data-gatherers can shift to become problem-solvers based on new data-driven insights. Make sure your employees are ready to learn and grow to take advantage of these opportunities.



Quote for the day:

"The essence of leadership is the willingness to make the tough decisions. Prepared to be lonely." -- Colin Powell

Daily Tech Digest - June 05, 2022

How the Web3 stack will automate the enterprise

Web3 exists only partially within enterprises today, but it is already making an incredible impact and altering strategies. Cross River Bank, which just raised $620 million at a $3 billion valuation, powers embedded payments, cards, lending, and crypto solutions for over 80 leading technology partners. Cross River CEO Giles Gade’s plan is to start offering more crypto-related products and services, gearing towards a crypto-first strategy. Investors are excited by the opportunity. “As Web3 continues to gain mindshare of consumers and businesses alike, we believe Cross River sits in a unique position to serve as the infrastructure and interconnective tissue between the traditional and regulated centralized financial system, as it transitions slowly to a decentralized one,” said Lior Prosor, General Partner and Co-founder of Hanaco Ventures in the Cross River press release. In many ways, this time is no different than when financial institutions and VCs saw the disruptive potential by investing in FinTech innovation – analog to digital – years prior. If FinTech is the blending of technology and finance, Web3 is the merging of crypto with the web.

Demystifying the Metrics Store and Semantic Layer

First, many critical data assets end up isolated on local servers, data centers and cloud services. Unifying them poses a significant challenge. Often, there are also no standardized data and business definitions, and this adds to the difficulty for businesses to tap into the full value of their data. As companies embark on new data management projects, they need to address these concerns; however, many have chosen to avoid this issue for one reason or another. This results in new data silos across the business. Second, as every data warehouse practitioner is aware, it’s difficult for most business users to interpret the data in the warehouse. Because technical metadata like table names, column names and data types are typically worthless to business users, data warehouses aren’t enough when it comes to allowing users to conduct analysis on their own. From a business user’s perspective, what can be done to solve this problem? Two popular solutions are metrics stores and semantic layers, but which is the best approach? And what’s the difference between them?
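A minimal sketch of the metrics-store idea, with hypothetical metric names and order rows: definitions live in one shared place, so every consuming tool computes “revenue” the same way instead of re-deriving it from raw warehouse tables:

```python
# Central, shared metric definitions -- the single source of truth.
METRICS = {
    "revenue": lambda rows: sum(r["price"] * r["qty"] for r in rows),
    "avg_order_value": lambda rows: (
        sum(r["price"] * r["qty"] for r in rows) / len(rows) if rows else 0.0
    ),
}

def query(metric_name: str, rows: list) -> float:
    """Every tool (BI dashboard, notebook, report) calls the same definition,
    so a metric means the same thing everywhere it appears."""
    return METRICS[metric_name](rows)

orders = [{"price": 10.0, "qty": 2}, {"price": 5.0, "qty": 1}]
revenue = query("revenue", orders)      # 25.0
aov = query("avg_order_value", orders)  # 12.5
```

A semantic layer goes further by mapping such definitions onto business-friendly names and relationships, but the core benefit shown here is the same: one definition, many consumers.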


Why HR plays an important role in preventing cyber attacks

HR staff members often work with legal counsel on security policies, including the creation, maintenance and enforcement of acceptable usage policies. Since HR staff communicates frequently with employees, they are well positioned to share information about security and privacy expectations and often already work to keep security topics top-of-mind for employees. ... As with security policy work, HR professionals are often a valuable part of compliance-related initiatives because certain aspects of state, federal and international privacy and security compliance regulations require HR expertise. This is particularly true for larger organizations that have office locations or employees in multiple countries. HR may work on the creation of processes including user onboarding and offboarding, security awareness and training, and the steps for incident response once a crisis occurs. ... Some HR professionals already serve on their IT and security governance committee, as it's only natural that HR should help get the word out on security and assist with policy creation and administration when needed.


7 Reasons Why Serverless Encourages Useful Engineering Practices

They are easier to change. After reading the book “The Pragmatic Programmer”, I realized that making your software easy to change is THE de-facto principle to live by as an IT professional. For instance, when you leverage functional programming with pure (ideally idempotent) functions, you always know what to expect as input and output. Thus, modifying your code is simple. If written properly, serverless functions encourage code that is easy to change and stateless. They are easier to deploy — if the changes you made to an individual service don’t affect other components, redeploying a single function or container should not disrupt other parts of your architecture. This is one reason why many decide to split their Git repositories from a “monorepo” to one repository per service. With serverless, you are literally forced to make your components small. For instance, you cannot run any long-running processes with AWS Lambda (at least for now). At the time of writing, the maximum timeout configuration doesn’t allow for any process that takes longer than 15 minutes. 
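The “pure, idempotent function” pattern the author recommends can be sketched as a Lambda-style handler. The event shape below is a made-up example, not a real AWS trigger payload:

```python
import json

def handler(event: dict, context: object = None) -> dict:
    """Pure and idempotent: the response depends only on the input event,
    so retries, redeploys, and parallel invocations are all safe."""
    items = event.get("items", [])
    total = sum(item["price"] * item["qty"] for item in items)
    return {"statusCode": 200, "body": json.dumps({"total": total})}

event = {"items": [{"price": 5.0, "qty": 2}, {"price": 1.5, "qty": 4}]}
first = handler(event)
second = handler(event)  # idempotent: same input, same output
```

Because the handler holds no state and touches nothing outside its arguments, redeploying it cannot disrupt other parts of the architecture.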



WTF is a Service Mesh?

The internal workings of a Service Mesh are conceptually fairly simple: every microservice is accompanied by its own local HTTP proxy. These proxies perform all the advanced functions that define a Service Mesh (think about the kind of features offered by a reverse proxy or API Gateway). However, with a Service Mesh this is distributed between the microservices—in their individual proxies—rather than being centralised. In a Kubernetes environment these proxies can be automatically injected into Pods, and can transparently intercept all of the microservices’ traffic; no changes to the applications or their Deployment YAMLs (in the Kubernetes sense of the term) are needed. These proxies, running alongside the application code, are called sidecars. These proxies form the data plane of the Service Mesh, the layer through which the data—the HTTP requests and responses—flow. This is only half of the puzzle though: for these proxies to do what we want they all need complex and individual configuration. Hence a Service Mesh has a second part, a control plane.
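The data-plane idea can be sketched without real networking: a stand-in proxy object fronts a service and transparently applies policy pushed down by the control plane (retries and metrics here), while the “application” code stays unchanged. The service and payloads are invented for illustration:

```python
class SidecarProxy:
    """Concept sketch of a sidecar data plane: every call in and out of the
    service passes through the proxy, which adds retries and metrics
    transparently -- the application itself is never modified."""

    def __init__(self, upstream, retries: int = 2):
        self.upstream = upstream  # the "microservice" being fronted
        self.retries = retries    # policy supplied by the control plane
        self.metrics = {"requests": 0, "failures": 0}

    def request(self, payload):
        self.metrics["requests"] += 1
        for _attempt in range(self.retries + 1):
            try:
                return self.upstream(payload)
            except ConnectionError:
                self.metrics["failures"] += 1
        raise ConnectionError("upstream unavailable after retries")

calls = {"n": 0}
def flaky_service(payload):  # fails on the first call, then succeeds
    calls["n"] += 1
    if calls["n"] == 1:
        raise ConnectionError
    return {"echo": payload}

proxy = SidecarProxy(flaky_service)
result = proxy.request("hello")  # transparently retried by the sidecar
```

In a real mesh the proxy is a separate process (such as Envoy) injected into the Pod, and the per-proxy configuration shown here as constructor arguments is distributed by the control plane.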


Best Practices for Deploying Language Models

We’re recommending several key principles to help providers of large language models (LLMs) mitigate the risks of this technology in order to achieve its full promise to augment human capabilities. While these principles were developed specifically based on our experience with providing LLMs through an API, we hope they will be useful regardless of release strategy (such as open-sourcing or use within a company). We expect these recommendations to change significantly over time because the commercial uses of LLMs and accompanying safety considerations are new and evolving. We are actively learning about and addressing LLM limitations and avenues for misuse, and will update these principles and practices in collaboration with the broader community over time. We’re sharing these principles in hopes that other LLM providers may learn from and adopt them, and to advance public discussion on LLM development and deployment.


A cybersecurity expert explains why it would be so hard to obscure phone data in a post-Roe world

There’s not a whole lot users can do to protect themselves. Communications metadata and device telemetry – information from the phone sensors – are used to send, deliver and display content. Not including them is usually not possible. And unlike the search terms or map locations you consciously provide, metadata and telemetry are sent without you even seeing it. Providing consent isn’t plausible. There’s too much of this data, and it’s too complicated to decide each case. Each application you use – video, chat, web surfing, email – uses metadata and telemetry differently. Providing truly informed consent that you know what information you’re providing and for what use is effectively impossible. If you use your mobile phone for anything other than a paperweight, your visit to the cannabis dispensary and your personality – how extroverted you are or whether you’re likely to be on the outs with family since the 2016 election – can be learned from metadata and telemetry and shared.


Three Architectures That Could Power The Robotic Age With Autonomous Machine Computing

Similar to other information technology stacks, the autonomous machine computing technology stack consists of hardware, systems software and application software. Sitting in the middle of this technology stack is computer architecture, which defines the core abstraction between hardware and software. The existence of this abstraction layer allows software developers to focus on optimizing the software to fully utilize the underlying hardware to develop better applications as well as to achieve higher performance and higher energy efficiency. This abstraction layer also allows hardware developers to focus on developing faster, more affordable, more energy-efficient hardware that can unlock the imagination of software developers. ... Hence, computer architecture is essential to information technology. For instance, in the personal computing era, x86 has become the dominant computer architecture due to its superior performance. In the mobile computing era, ARM has become the dominant computer architecture due to its superior energy efficiency. 


Datadog finds serverless computing is going mainstream

Serverless represents the ideal state of cloud computing, where you only use exactly what resources you need and no more. That’s because the cloud provider delivers only those resources when a specific event happens and shuts it down when the event is over. It’s not a lack of servers, so much as not having to deploy the servers because the provider handles that for you in an automated fashion. When people began talking about cloud computing around 2008, one of the advantages was elastic computing, or only using what you need, scaling up or down as necessary. In reality, developers don’t know what they’ll need, so they’ll often overprovision to make sure the application stays up and running. The company created the report based on data running through its monitoring service. While it represents only the activity from its customers, Rabinovitch sees it as quality data given the broad range of customers it has using its services. “We do think we’re well represented across the industry, and we believe that we’re representative of real production workloads,” he said.


How Platform Engineering Helps Manage Innovation Responsibly

Platform engineering, then, is a support function. If it enables, it does so by reducing complexity and making it easier for developers and other technical teams to achieve their objectives. Moreover, one of the advantages of having a platform engineering team is that it can balance competing needs and aims — like, for example, developer experience and security — in a way that ensures engineering capabilities and commercial imperatives are properly aligned. Calling it a “support function” might not sound particularly sexy, but it nevertheless suggests that organizations are maturing in their approach to software development. It’s no longer the locus of moving fast and breaking things, but instead recognized as something that requires care and stewardship. But this implies responsibility — and that, to invert the old adage, carries considerable power. This means that platform engineering can become a political beast within organizations. If it can shape the way developers work, it can inevitably play a part in the direction of a whole technology strategy.



Quote for the day:

"Leadership is developed daily, not in a day." -- John C. Maxwell

Daily Tech Digest - June 02, 2022

A decentralized verification system could be the key to boosting digital security

Instead of placing trust in a single central entity, decentralization places trust in the network as a whole, and this network can exist outside of the IAM system using it. The mathematical structure of the algorithms underpinning the decentralized authority ensures that no single node can act alone. Moreover, each node on the network can be operated by an independently operating organization, such as a bank, a telecommunications company, or a government department. So, stealing a single secret would require hacking several independent nodes. Even in the event of an IAM system breach, the attacker would only gain access to some user data – not the entire system. And to award themselves authority over the entire organization, they would need to breach a combination of 14 independently operating nodes. This isn’t impossible, but it’s a lot harder. But beautiful mathematics and verified algorithms still aren’t enough to make a usable system. There’s more work to be done before we can take decentralized authority from a concept to a functioning network that will keep our accounts safe.
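The “14 independent nodes” requirement can be sketched as a quorum check. Real deployments use proper threshold signature schemes; the per-node HMAC keys and node names below are made up purely to show the shape of the logic:

```python
import hashlib
import hmac

# Hypothetical network: 20 independently operated nodes, each with its own key.
NODE_KEYS = {f"node-{i}": f"secret-{i}".encode() for i in range(20)}
THRESHOLD = 14  # no node -- and no small coalition -- can authorize anything alone

def node_sign(node_id: str, message: bytes) -> bytes:
    return hmac.new(NODE_KEYS[node_id], message, hashlib.sha256).digest()

def verify_quorum(message: bytes, signatures: dict) -> bool:
    """Authority exists only when at least THRESHOLD valid approvals agree."""
    valid = sum(
        1
        for node_id, sig in signatures.items()
        if node_id in NODE_KEYS
        and hmac.compare_digest(sig, node_sign(node_id, message))
    )
    return valid >= THRESHOLD

msg = b"grant-admin:alice"
full = {nid: node_sign(nid, msg) for nid in list(NODE_KEYS)[:14]}
partial = dict(list(full.items())[:5])  # breaching 5 nodes is not enough
```

An attacker who compromises a handful of nodes gains nothing: the quorum check fails until 14 independent approvals agree.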


Emerging digital twins standards promote interoperability

Digital twins today are mostly application-driven. “But what we really need is the interoperable digital twin so we can realize the interoperability between these different digital twins,” said Christian Mosch, general manager at IDTA. The IDTA Asset Administration Shell standard provides a framework for sharing data across the different lifecycle phases such as planning, development, construction, commissioning, operation and recycling at the end of life. It provides a way of thinking about assets such as a robot arm and the administration of the different data and documents that describe it across various lifecycle phases. The shell provides a container for consistently storing different types of information and documentation. For example, the robot arm might include engineering data such as 3D geometry drawings, design properties and simulation results. It may also include documentation such as declarations of conformity and proof certifications. The Asset Administration Shell also brings data from operations technology used to manage equipment on the shop floor into the IT realm to represent data across the lifecycle. 


4 Database Access Control Methods to Automate

The beauty of using security automation as a data broker is that it has the ability to validate data-retrieval requests. This includes verifying that the requestor actually has permission to see the data being requested. If the proper permissions aren’t in place, the user can submit a request to be added to a specific role through the normal request channels, which is typically the way to go. With automated data access control, this request could be generated and sent within the solution to streamline the process. This also allows additional context-specific information to be included in the data-access request automatically. For example, if someone requests data that they do not have access to within their role, the solution can be configured to look up the database owner, populate an access request and send it to the owner of the data, who can then approve one-time access or grant access for a certain period of time. A common scenario where this is useful is when an employee goes on vacation and someone new is helping with their clients’ needs while they are out.
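The role check plus auto-generated, time-boxed request to the data owner described above might look like this; the role names, databases, and owners are all invented for illustration:

```python
from datetime import datetime, timedelta

ROLE_GRANTS = {"analyst": {"sales_db"}, "hr": {"people_db"}}
DB_OWNERS = {"sales_db": "dana", "people_db": "omar"}
pending_requests = []

def fetch(user: str, role: str, database: str) -> dict:
    """Broker a data request: serve it when the role permits; otherwise
    look up the data owner and auto-generate a time-boxed access request."""
    if database in ROLE_GRANTS.get(role, set()):
        return {"status": "granted", "data": f"rows from {database}"}
    request = {
        "requester": user,
        "database": database,
        "owner": DB_OWNERS[database],                      # looked up automatically
        "expires": datetime.utcnow() + timedelta(days=1),  # one-time / temporary grant
    }
    pending_requests.append(request)
    return {"status": "pending", "sent_to": request["owner"]}

ok = fetch("alice", "analyst", "sales_db")
blocked = fetch("alice", "analyst", "people_db")
```

This covers the vacation scenario directly: the stand-in employee's denied request is routed to the data owner, who can approve one-time or time-limited access without anyone filing a manual ticket.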


AI still needs humans to stay intelligent—here’s why

Remember, AI models are usually programmes or algorithms built to use data to recognise patterns, and either reach a conclusion or make a prediction. Once designed, paid for, and implemented, it’s easy to assume that these models will stay smart forever. Instead, they nearly always require regular human intervention. Why? Let’s look at a few examples: It’s likely that the technology your organisation uses in day-to-day operations is regularly changed and upgraded; Your company might have uncovered new intelligence about your customers, such as levels of interaction with a recently launched product; Your business’ strategies may change – for example, you might switch focus from reducing production costs to investing in a quality customer experience.  ... Where possible, avoid ‘technical debt’ by focusing on gradual AI improvements, rather than waiting for an issue to flare up and then facing a gruelling system overhaul. And finally, strive to create an AI-aware culture in your workplace. Educate your employees on how your AI systems work, why they’re reliable, why they’re to be trusted rather than feared – and that they’re not a replacement for their jobs.
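A common, lightweight way to decide when a model needs that human intervention is to monitor input drift. A Population Stability Index check might look like the following; the bucket values are invented:

```python
import math

def psi(expected: list, actual: list, eps: float = 1e-6) -> float:
    """Population Stability Index over matched histogram buckets --
    a standard drift signal for flagging models that need human review."""
    return sum(
        (a - e) * math.log((a + eps) / (e + eps))
        for e, a in zip(expected, actual)
    )

baseline = [0.25, 0.25, 0.25, 0.25]  # feature distribution at training time
today = [0.10, 0.20, 0.30, 0.40]     # distribution seen in production
drift = psi(baseline, today)
needs_review = drift > 0.1           # a common rule-of-thumb threshold
```

Running a check like this on a schedule turns "regular human intervention" into gradual, targeted maintenance rather than a gruelling overhaul after the model has visibly failed.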


Massive shadow code risk for world’s largest businesses

“While retail and credit card breaches grab the most headlines, this is a pervasive and relatively unchecked risk to both security and privacy across all verticals,” said Dan Dinnar, CEO of Source Defense. “It’s also a fast-growing and extremely volatile issue with regard to sensitive data. Organizations and their digital supply chain partners are constantly updating sites and code, and the data of greatest value to malicious actors is collected on the pages where the business has the greatest need for analytics, tag management, and other tracking and management capabilities.” Extensive libraries of third-party scripts are available free, or at low cost, from a range of communities, organizations, and even individuals, and are extremely popular as they allow development teams to quickly add advanced functionality to applications without the burden of creating and maintaining them. These packages also often contain code from additional parties further removed from – and farther out of the purview of – the deploying organization.


High-tech legislation through self-regulation

In industries where no direct legislation exists, judges have to rely on a multitude of secondary factors, putting additional strain on them. In some cases, they might be left with only the general principles of law. In web scraping, data protection laws such as the GDPR became the go-to area for related cases. Many of them have been decided on the basis of these regulations, and rightfully so. But scraping is about much more than just data protection. Case law, mostly from the US, has in turn served as one of the fundamental guides that have shaped our current understanding of the legal intricacies of web scraping. Regretfully, though, that direction isn’t set in stone. Using such indirect laws and practices to regulate an industry, even with the best intentions, can lead to unsatisfying outcomes. A majority of the publicly accessible data is held by specific companies, particularly social media websites. Social media companies and other data giants will do everything in their power to protect the data they hold. Unfortunately, they might sometimes go too far when protecting personal data.


Why AI Ethics Is Even More Important Now

AI ethics stems from a company's values. Those values should be reflected in the company's culture as well as in how the company uses AI. One cannot assume that technologists can simply build or implement something on their own that will necessarily produce the desired outcomes. "You cannot create a technological solution that will prevent unethical use and only enable the ethical use," said Forrester's Carlsson. "What you need actually is leadership. You need people to be making those calls about what the organization will and won't be doing and be willing to stand behind those, and adjust those as information comes in." Translating values into AI implementations that align with them requires an understanding of AI, the use cases, who or what could potentially benefit, and who or what could potentially be harmed. "Most of the unethical use that I encounter is done unintentionally," said Forrester's Carlsson. "Of the use cases where it wasn't done unintentionally, usually they knew they were doing something ethically dubious and they chose to overlook it." Part of the problem is that risk management professionals and technology professionals are not yet working together enough.


Digital transformation: 5 ways to create a realistic strategy

Understand that digital transformation doesn’t just happen in the IT department; it happens in the C-suite, in cubicles, and in home offices. That means all stakeholders need to be aligned on your company’s digital transformation goal. The directive must come from management, but the work will happen throughout the company, often precipitating a major cultural shift toward new technologies and processes. In such cases, training and change management might be necessary to make users feel more comfortable with the new tools and processes. Leaders need to ensure that their teams are on board with the direction the company is moving in, and they should be willing to listen to feedback as the organization continues along its journey. What that plan looks like is up to you. Digital transformation is different for everyone, and every company has its own objectives. Meeting those objectives can be daunting. But by setting a goal, performing an assessment, breaking your plan into manageable pieces, budgeting realistically, and getting everyone to buy in, you will succeed.


Three ways to prevent hybrid work from breaking your company culture

Companies need to take a hard look at the current environment and gauge how effectively it supports different types of work. Many aspects of office design are based on convention rather than deliberate thought. One analysis found that building thermostats typically have been calibrated for the comfort of men who are 40 years old and weigh approximately 154 pounds, which is cooler than is comfortable for most women. That norm was established decades ago and never updated. Just about every physical feature of the office can be made more conducive to hybrid work. Technology such as an online whiteboard for meetings, smart cameras that automatically pan to people as they talk, and virtual receptionists help to bridge the gap between virtual and in-office workforces. ... Last, leaders must set employees up for success. These support mechanisms can be quite diverse. The insurance company mentioned above, for instance, created training programs to give its employees the right skills to succeed in a hybrid workplace. These included tactical help on new technology, along with training for managers on effective virtual coaching conversations.


Why the Dual Operating Model Impedes Enterprise Agility

In the traditional organization, waiting for things (or queueing) is the norm: waiting for people to respond to emails, waiting days or weeks for a meeting because that’s the first open time on everyone’s calendar, or waiting for someone else to finish their part of a project so you can start yours. But waiting is death for agile teams; it wastes valuable time and diverts their focus. And when I say "death", I am not exaggerating for effect. Waiting makes agile teams ineffective, and over time it will kill the agile team’s ability to get things done. If an agile team has to wait every time it needs something from the rest of the organization, pretty soon it will act just like any other team. This is one reason why agile teams only seem to work on new initiatives that are completely disconnected from the existing organization: so long as they don’t have to interact with the rest of the organization, so long as they are completely self-contained, they don’t waste time waiting and they can work in an agile way. But once they need expertise or authority they don’t have, it all starts to fall apart.



Quote for the day:

"Being defeated is often a temporary condition. Giving up is what makes it permanent." -- Marilyn vos Savant