Daily Tech Digest - July 28, 2021

DevOps Is Dead, Long Live AppOps

The NoOps trend aims to remove the friction between development and operations by simply removing operations, as the name suggests. This may seem a drastic solution, but we do not have to take it literally. The right interpretation — the feasible one — is to remove the human component from the deployment and delivery phases as much as possible. That approach is naturally supported by the cloud, which helps things work by themselves. ... One of the clearest scenarios illustrating the benefit of AppOps is any application based on Kubernetes. If you open any cluster you will find a lot of pod/service/deployment settings that are mostly the same. In fact, every PHP application has essentially the same configuration, except for its parameters. The same goes for Java, .NET, and other applications. The problem is that Kubernetes is agnostic to the content of the applications it hosts, so we need to inform it about every detail. We have to start from scratch for every new application even if the technology is the same. Why? I should only have to explain once how a PHP application is composed.
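
That parameterization can be made concrete with a small sketch. The Python snippet below is illustrative only (the app names, images and port are assumptions, and this is not the AppOps tooling itself); it generates the same generic Kubernetes Deployment manifest for any such application, with only the parameters changing:

    import json

    def deployment_manifest(app_name, image, port, replicas=2):
        """Build a generic Kubernetes Deployment for a typical web application.

        The structure is identical for every app of the same kind; only the
        parameters differ.
        """
        labels = {"app": app_name}
        return {
            "apiVersion": "apps/v1",
            "kind": "Deployment",
            "metadata": {"name": app_name, "labels": labels},
            "spec": {
                "replicas": replicas,
                "selector": {"matchLabels": labels},
                "template": {
                    "metadata": {"labels": labels},
                    "spec": {
                        "containers": [{
                            "name": app_name,
                            "image": image,
                            "ports": [{"containerPort": port}],
                        }]
                    },
                },
            },
        }

    if __name__ == "__main__":
        # Two hypothetical PHP apps, one shared template: only the parameters change.
        # kubectl accepts JSON as well as YAML, e.g. `kubectl apply -f shop.json`.
        for name, image in [("shop", "example/shop-php:1.0"), ("blog", "example/blog-php:2.3")]:
            print(json.dumps(deployment_manifest(name, image, port=8080), indent=2))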


Thrill-K: A Blueprint for The Next Generation of Machine Intelligence

Living organisms and computer systems alike must have instantaneous knowledge to allow for rapid response to external events. This knowledge represents a direct input-to-output function that reacts to events or sequences within a well-mastered domain. In addition, humans and advanced intelligent machines accrue and utilize broader knowledge with some additional processing. I refer to this second level as standby knowledge. Actions or outcomes based on this standby knowledge require processing and internal resolution, which makes it slower than instantaneous knowledge. However, it will be applicable to a wider range of situations. Humans and intelligent machines need to interact with vast amounts of world knowledge so that they can retrieve the information required to solve new tasks or increase standby knowledge. Whatever the scope of knowledge is within the human brain or the boundaries of an AI system, there is substantially more information outside or recently relevant that warrants retrieval. We refer to this third level as retrieved external knowledge.
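
A toy sketch can make the three levels concrete. The code below is purely illustrative (the lookup tables and the external-retrieval stub are invented, not part of Thrill-K itself): it answers from instantaneous knowledge when it can, falls back to standby knowledge, and only then retrieves from an external source, caching the result as new standby knowledge.

    # Toy illustration only: a three-level knowledge lookup mirroring the
    # instantaneous / standby / retrieved-external split described above.
    INSTANTANEOUS = {"2+2": "4", "capital of France": "Paris"}  # reflex-style answers
    STANDBY = {"photosynthesis": "plants convert light, water and CO2 into glucose"}

    def retrieve_external(query):
        # Stand-in for querying a vast external corpus (search index, knowledge graph, ...).
        return f"[retrieved from external sources for: {query}]"

    def answer(query):
        if query in INSTANTANEOUS:         # level 1: direct input-to-output reaction
            return INSTANTANEOUS[query]
        if query in STANDBY:               # level 2: slower, broader, resolved internally
            return STANDBY[query]
        result = retrieve_external(query)  # level 3: reach out to world knowledge
        STANDBY[query] = result            # retrieval can also grow standby knowledge
        return result

    print(answer("2+2"))
    print(answer("history of the printing press"))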


GitHub’s Journey From Monolith to Microservices

Good architecture starts with modularity. The first step towards breaking up a monolith is to think about the separation of code and data based on feature functionalities. This can be done within the monolith before physically separating them in a microservices environment. It is generally a good architectural practice to make the code base more manageable. Start with the data and pay close attention to how they’re being accessed. Make sure each service owns and controls access to its own data, and that data access only happens through clearly defined API contracts. I’ve seen a lot of cases where people start by pulling out the code logic but still rely on calls into a shared database inside the monolith. This often leads to a distributed monolith scenario where it ends up being the worst of both worlds: having to manage the complexities of microservices without any of the benefits, such as being able to quickly and independently deploy a subset of features into production. Getting data separation right is a cornerstone in migrating from a monolithic architecture to microservices.
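
The contrast between reaching into a shared database and going through a clearly defined API contract can be sketched in a few lines. Everything below is hypothetical (the table, endpoint and response shape are invented for illustration):

    import json
    import sqlite3
    import urllib.request

    # Anti-pattern: a "separated" service still reaching into the monolith's shared database.
    def get_user_email_shared_db(user_id):
        conn = sqlite3.connect("monolith.db")  # shared schema, owned by nobody in particular
        row = conn.execute("SELECT email FROM users WHERE id = ?", (user_id,)).fetchone()
        conn.close()
        return row[0] if row else None

    # Preferred: data access only through the owning service's published API contract.
    def get_user_email_via_api(user_id):
        url = f"http://users-service.internal/v1/users/{user_id}"  # hypothetical endpoint
        with urllib.request.urlopen(url) as resp:
            return json.load(resp)["email"]  # contract: {"id": ..., "email": ...}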


Data Strategy vs. Data Architecture

By being abstracted from the problem solving and planning process, enterprise architects became unresponsive, he said, and “buried in the catacombs” of IT. Data Architecture needs to look at finding and putting the right mechanisms in place to support business outcomes, which could be everything from data systems and data warehouses to visualization tools. Data architects who see themselves as empowered to facilitate the practical implementation of the Business Strategy by offering whatever tools are needed will make decisions that create data value. “So now you see the data architect holding the keys to a lot of what’s happening in our organizations, because all roads lead through data.” Algmin thinks of data as energy, because stored data by itself can’t accomplish anything, and like energy, it comes with significant risks. “Data only has value when you put it to use, and if you put it to use inappropriately, you can create a huge mess,” such as a privacy breach. Like energy, it’s important to focus on how data is being used and have the right controls in place. 


Why CISA’s China Cyberattack Playbook Is Worthy of Your Attention

In the new advisory, CISA warns that the attackers will also compromise email and social media accounts to conduct social engineering attacks. A person is much more likely to click on an email and download software if it comes from a trusted source. If the attacker has access to an employee’s mailbox and can read previous messages, they can tailor their phishing email to be particularly appealing – and even make it look like a response to a previous message. Unlike “private sector” criminals, state-sponsored actors are more willing to use convoluted paths to get to their final targets, said Patricia Muoio, former chief of the NSA’s Trusted System Research Group, who is now general partner at SineWave Ventures. ... Private cybercriminals look for financial gain. They steal credit card information and health care data to sell on the black market, hijack machines to mine cryptocurrencies, and deploy ransomware. State-sponsored attackers are after different things. If they plan to use your company as an attack vector to go after another target, they'll want to compromise user accounts to get at their communications.


Breaking through data-architecture gridlock to scale AI

Organizations commonly view data-architecture transformations as “waterfall” projects. They map out every distinct phase—from building a data lake and data pipelines up to implementing data-consumption tools—and then tackle each only after completing the previous ones. In fact, in our latest global survey on data transformation, we found that nearly three-quarters of global banks are knee-deep in such an approach. However, organizations can realize results faster by taking a use-case approach. Here, leaders build and deploy a minimum viable product that delivers the specific data components required for each desired use case (Exhibit 2). They then make adjustments as needed based on user feedback. ... Legitimate business concerns over the impact any changes might have on traditional workloads can slow modernization efforts to a crawl. Companies often spend significant time comparing the risks, trade-offs, and business outputs of new and legacy technologies to prove out the new technology. However, we find that legacy solutions cannot match the business performance, cost savings, or reduced risks of modern technology, such as data lakes.


Data-Intensive Applications Need Modern Data Infrastructure

Modern applications are data-intensive because they make use of a breadth of data in more intricate ways than anything we have seen before. They combine data about you, about your environment, about your usage and use that to predict what you need to know. They can even take action on your behalf. This is made possible because of the data made available to the app and data infrastructure that can process the data fast enough to make use of it. Analytics that used to be done in separate applications (like Excel or Tableau) are getting embedded into the application itself. This means less work for the user to discover the key insight or no work as the insight is identified by the application and simply presented to the user. This makes it easier for the user to act on the data as they go about accomplishing their tasks. To deliver this kind of application, you might think you need an array of specialized data storage systems, ones that specialize in different kinds of data. But data infrastructure sprawl brings with it a host of problems.  


The Future of Microservices? More Abstractions

A couple of other initiatives regarding Kubernetes are worth tracking. Jointly created by Microsoft and Alibaba Cloud, the Open Application Model (OAM) is a specification for describing applications that separates the application definition from the operational details of the cluster. It thereby enables application developers to focus on the key elements of their application rather than the operational details of where it deploys. Crossplane is the Kubernetes-specific implementation of the OAM. It can be used by organizations to build and operate an internal platform-as-a-service (PaaS) across a variety of infrastructures and cloud vendors, making it particularly useful in multicloud environments, such as those increasingly common in large enterprises through mergers and acquisitions. Whilst OAM seeks to separate out the responsibility for deployment details from writing service code, service meshes aim to shift the responsibility for interservice communication away from individual developers via a dedicated infrastructure layer that focuses on managing the communication between services using a proxy.
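
The separation OAM draws can be pictured with a schematic example. The dictionaries below are not the real OAM schema (field names are simplified and invented); they only illustrate which concerns belong to the application developer and which to the operators of a particular cluster:

    # Schematic only: NOT the real OAM schema, just simplified dictionaries
    # showing who owns which concerns.
    component = {
        # Developer-owned: what the application is.
        "name": "orders-service",
        "workload": {"image": "example/orders:1.4", "port": 8080},
    }

    operational_traits = [
        # Operator-owned: how and where it runs on a particular cluster.
        {"trait": "manual-scaler", "replicas": 5},
        {"trait": "ingress", "host": "orders.example.com"},
    ]

    # An application configuration binds the two, so the developer never needs
    # to know the cluster-specific details.
    application_configuration = {
        "component": component["name"],
        "traits": operational_traits,
    }

    print(application_configuration)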


Navigating data sovereignty through complexity

Data sovereignty is the concept that data is subject to the laws of the country in which it is processed. In a world of rapid adoption of SaaS, cloud and hosted services, it is easy to see the issues that data sovereignty can raise. In simpler times, data wasn’t something businesses needed to be concerned about; it could be shared and transferred freely with no consequence. Businesses that had a digital presence operated on a small scale, with low data demands hosted on on-premises infrastructure. This meant that data could be monitored and kept secure, very different from the more distributed and hybrid systems that many businesses use today. With so much data sharing and so little regulation, it all came crashing down with the Cambridge Analytica scandal, which broke in 2018 and prompted strict laws on privacy. ... When dealing with on-premises infrastructure, governance is clearer, as the data must follow the rules of the country it’s in. However, when data is in the cloud, a business can store it in any number of locations regardless of where the business itself is.


How security leaders can build emotionally intelligent cybersecurity teams

EQ is important, as it has been found by Goleman and Cary Cherniss to positively influence team performance and to cultivate positive social exchanges and social support among team members. However, rather than focusing on cultivating EQ, cybersecurity leaders such as CISOs and CIOs are often preoccupied by day-to-day operations (e.g., dealing with the latest breaches, the latest threats, board meetings, team meetings and so on). In doing so, they risk overlooking the importance of the development and strengthening of their own emotional intelligence (EQ) and that of the individuals within their teams. As well as EQ considerations, cybersecurity leaders must also be conscious of the team’s makeup in terms of gender, age and cultural attributes and values. This is very relevant to cybersecurity teams as they are often hugely diverse. Such values and attributes will likely introduce a diverse set of beliefs defined by how and where an individual grew up and the values of their parents. 



Quote for the day:

"The mediocre leader tells The good leader explains The superior leader demonstrates The great leader inspires." -- Buchholz and Roth

Daily Tech Digest - July 27, 2021

How AI Algorithms Are Changing Trading Forever

The Aite Group in its report “Hedge Fund Survey, 2020: Algorithmic Trading” argues that the main reason for the growing popularity of algorithms in trading is the attempt to reduce the influence of the human factor in a highly volatile market. The economic fallout from COVID-19 has seen a record-breaking drop in the American, European, and Chinese stock markets. And only a few months later, measures to stimulate the economy were able to stop the fall and reverse the downtrend. Thus we get the first task of algorithmic trading: risk reduction in a market with high volatility. The second global advantage of algorithmic trading lies in the ability to analyze the potential impact of a trade on the market. This can be especially useful for hedge funds and institutional investors who handle large sums of money with a visible effect on price movements. The third fundamental advantage of trading algorithms is protection from emotions. Traders and investors, like all living people, experience fear, greed, the fear of lost profits, and other emotions. These emotions have a negative impact on performance and results.
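
To make “protection from emotions” concrete, here is a toy rule-based signal in Python. It is purely illustrative (the prices are invented and this is not a real or recommended strategy); the point is that the rule executes the same way regardless of fear or greed.

    # Toy, emotion-free trading rule (moving-average crossover). Illustrative only.
    prices = [101, 103, 102, 105, 107, 106, 110, 108, 104, 99, 97, 95]  # invented closes

    def moving_average(series, window):
        return sum(series[-window:]) / window

    def signal(history, short=3, long=6):
        """Buy when the short average is above the long one, sell when below."""
        if len(history) < long:
            return "hold"
        short_ma = moving_average(history, short)
        long_ma = moving_average(history, long)
        if short_ma > long_ma:
            return "buy"
        if short_ma < long_ma:
            return "sell"
        return "hold"

    for day in range(1, len(prices) + 1):
        # The rule fires identically whether the market is calm or panicking.
        print(day, signal(prices[:day]))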


How to prevent corporate credentials ending up on the dark web

Employees are the weakest link in any organization’s security posture. A Tessian report found that 43% of US and UK employees have made mistakes that resulted in cybersecurity repercussions for their organizations. Phishing scams, including emails that try to trick employees into sharing corporate login details, are particularly common. Educating employees on cyber threats and how to spot them is crucial to mitigating attacks. However, for training to be effective, it needs to consist of more than just repetitive lectures. In the report mentioned above, 43% of respondents said a legitimate-looking email was the reason they fell for a phishing scam, while 41% of employees said they were fooled because the email looked like it came from higher up. Live-fire security drills can help employees familiarize themselves with real-world phishing attacks and other password hacks. Safety awareness training should also teach workers the importance of good practices like using a virtual private network (VPN) when working from home and making social media accounts private.


IT leadership: 4 ways to find opportunities for improvement

Technology leaders should regularly use their own technology to better identify pain points and opportunities for improvements. That means that I should be teaching and using the same systems that faculty does to understand their experience through their lens. I should be meeting regularly with them and generating a Letterman-style Top 10 list of the things I hate most about my technology experience. This is something to do with the students, too. What do they hate most about the technology at the university? And how can we partner with them to address these issues over the next 12 months? Several years ago, for example, we reexamined our application process. If a prospective student wanted to submit an application, we required them to generate a unique username and password. If the one they chose was already taken, they needed to continue creating alternate versions until they eventually landed upon one that was available. If someone began the application process and logged off to complete it later, then forgot their username and password, they’d have to start all over again. It was absurd.


Data Management In The Age Of AI

AI is increasingly converging traditional high-performance computing and high-performance data analytics pipelines, resulting in multi-workload convergence. Data analytics, training and inference are now being run on the same accelerated computing platform. Increasingly, the accelerated compute layer isn’t limited to GPUs—it now involves FPGAs, graph processors and specialized accelerators. Use cases are moving from computer vision to multi-modal and conversational AI, and recommendation engines are using deep learning, while low-latency inference is used for personalization on LinkedIn, translation on Google and video on YouTube. Convolutional neural networks (CNNs) are being used for everything from annotation and labeling to transfer learning. And learning is moving to federated learning and active learning, while deep neural networks (DNNs) are becoming even more complex with billions of parameters. The result of these transitions is different stages within the AI data pipelines, each with distinct storage and I/O requirements.


SASE: Building a Migration Strategy

Gartner's analysts say that "work from anywhere" and cloud-based computing have accelerated cloud-delivered SASE offerings to enable anywhere, anytime secure access from any device. Security and risk management leaders should build a migration plan from the legacy perimeter and hardware-based offerings to a SASE model. One hindrance to SASE adoption, some security experts tell me, is that organizations lack visibility into sensitive data and awareness of threats. Too many enterprises have separate security and networking teams that don't share information and lack an all-encompassing security strategy, they say. "While the vendors are touting SASE as the end-all solution, the key to success would depend upon how well we define the SASE operating model, particularly when there are so many vendors coming up with SASE-based solutions," says Bengaluru-based Sridhar Sidhu, senior vice president and head of the information security services group at Wells Fargo. Yask Sharma, CISO of Indian Oil Corp., says that as data centers move to the cloud, companies need to use SASE to enhance security while controlling costs.


How to Bridge the Gap between Netops and Secops

If you were designing the managerial structure for a software development firm from scratch today, it’s very unlikely that you would separate NetOps and SecOps in the first place. Seen from the perspective of 2021, many of the monitoring and visibility tools that both teams seek and use seem inherently similar. Unfortunately, however, the historical development of many firms has not been that simple. Teams and remits are not designed from the ground up with rationality in mind – instead they emerge from a complex series of interactions and ever-changing priorities. This means that different teams often end up with their own priorities, and can come to believe that they are more important than those of other parts of your organization. This is seen very clearly in the distinction between SecOps and NetOps teams in many firms. At the executive level, your network exists in order to facilitate connections – between systems and applications but above all between staff members. Yet for many NetOps teams, the network can come to be seen as an end in itself.


The future of data science and risk management

“Enterprise data is growing nearly exponentially, and it is also increasing in complexity in terms of data types,” said Morgan. “We have gone way past the time when humans could sift through this amount of data in order to see large-scale trends and derive actionable insights. The platforms and best practices of data science and data analytics incorporate technologies which automate the analytics workflows to a large extent, making dataset size and complexity much easier to tackle with far less effort than in years past. “The second value-add is to leverage machine learning, and ultimately artificial intelligence, to go beyond historical and near-real-time trend analysis and ‘look into the future’, so to speak. Predictive analysis can unveil new customer needs for products and services and then forecast consumer reactions to resultant offers. Equally, predictive analytics can help uncover latent anomalies that lead to much better predictions about fraud detection and potentially risky behaviour. “Nothing can foretell the future with 100% certainty, but the ability of modern data science to provide scary-smart predictive analysis goes well beyond what an army of humans could do manually.”


DevOps Observability from Code to Cloud

DevOps has transformed itself in the last few years, completely changing from what we used to see as siloed tools connected together to highly integrated, single-pane-of-glass platforms. Collaboration systems like JIRA, Slack, and Microsoft Teams are connected to your observability tools such as Datadog, Dynatrace, Splunk, and Elastic. Finally, IT service management tools like PagerDuty are also connected in. Tying these best-in-class tools together on one platform, such as the JFrog Platform, yields high value to enterprises looking for an observability workflow. The security folks also need better visibility into an enterprise’s systems, to look for vulnerabilities. A lot of this information is available in JFrog Artifactory and Xray, but how do we leverage it in other partner systems like JIRA and Datadog? It all starts with JFrog Xray’s security impact analysis, where we can send an alert to Slack and rich security logs to Datadog to be analyzed by your site reliability engineer. A PagerDuty incident that’s also generated from Xray can then be used to create a JIRA issue quickly.
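
As a sketch of how such an alert might fan out, the Python below relays a violation to a Slack incoming webhook and the PagerDuty Events API v2. The violation payload shape, the webhook URL and the routing key are placeholders I have assumed for illustration; only the general request formats follow the vendors' published webhook documentation, as far as I know.

    import json
    import urllib.request

    SLACK_WEBHOOK = "https://hooks.slack.com/services/XXX/YYY/ZZZ"  # placeholder
    PAGERDUTY_ROUTING_KEY = "<integration-routing-key>"             # placeholder

    def post_json(url, payload):
        req = urllib.request.Request(
            url,
            data=json.dumps(payload).encode(),
            headers={"Content-Type": "application/json"},
        )
        return urllib.request.urlopen(req).status

    def relay(violation):
        summary = f"{violation['severity']} vulnerability in {violation['artifact']}: {violation['issue']}"
        # Slack incoming webhook: a simple text message for the team channel.
        post_json(SLACK_WEBHOOK, {"text": summary})
        # PagerDuty Events API v2: open an incident for the on-call SRE.
        post_json("https://events.pagerduty.com/v2/enqueue", {
            "routing_key": PAGERDUTY_ROUTING_KEY,
            "event_action": "trigger",
            "payload": {"summary": summary, "source": "artifact-scanner", "severity": "critical"},
        })

    # Example call with an assumed payload shape:
    # relay({"severity": "Critical", "artifact": "acme/app:1.2.3", "issue": "CVE-XXXX-YYYY"})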


6 Global Megatrends That Are Impacting Banking’s Future

The line between digital and physical has blurred, with consumers who once preferred brick-and-mortar engagements now researching, shopping and buying using digital channels more than ever. This trend is expected to increase across all industries. While organizations have enabled improved digital engagement over the past several months, there are still major pain points, mostly with speed, simplicity and cross-channel integration during the ‘first mile’ of establishing a relationship. The retail industry already understands that consumers are becoming increasingly impatient, wanting the convenience and transparency of eCommerce and the service and humanization of physical stores. In banking, consumers are diversifying their financial relationships, moving to fintech and big tech providers that can open relationships in an instant and personalize experiences. According to Brett King, founder of Moven and author of the upcoming book, ‘The Rise of Technosocialism’, “The ability to acquire new customers at ‘digital scale’ will impact market share and challenge existing budgets for branches. ..."


Understanding Contextual AI In The Modern Business World

Contextual AI can be divided into three pillars that help businesses become more visible to the people they want to reach. In the same sense, when a business is looking for a partner, it has to be sure that a prospect can offer the right services to fulfill its goals. Contextual AI aims to deliver that. The technology allows a brand to enhance its understanding of consumer interests. It is easy to make assumptions about consumer interests in different sectors, but difficult to prove them. ... In previous years, contextual AI was seen as an enhancing technology, but not an essential one. Now, the recognition of contextual AI as more than simply enhancing is growing. Businesses are constantly looking for more cost-effective solutions to their problems, and contextual AI offers one solution to fit that bracket. If you look at a similar alternative, such as behavioral advertising, it is heavily reliant on data — and lots of it. The huge amounts of data required to make this a success mean that businesses have to implement a successful collection, analysis and reporting solution in order to leverage it effectively. This can be a costly process if a business does not have large economies of scale.



Quote for the day:

"A true dreamer is one who knows how to navigate in the dark" -- John Paul Warren

Daily Tech Digest - July 26, 2021

CIOs and CFOs: creating a value-driven partnership

The CFO/CIO relationship is evolving in the UK and elsewhere in the world. The digitisation of everything is forcing both functions to recognise that technology is not just integral to running the business efficiently, but also permeates every aspect of business strategy and how companies define competitive advantage. Consequently, technology is exerting much greater influence on the way CFOs and CIOs think about their roles and how they define value for their organisations. ... “Technology is expanding the roles that CFOs and CIOs play in an organisation…”. It implies the need for closer collaboration between IT and finance in this country. If both roles collaborate and ask meaningful questions of each other, their shared expertise will enable them to better understand their contribution to delivering value for the business and how their combined skillsets can leverage the benefits of digitisation to become more productive. Yet, not all is sweetness and success, because traditionally both functions have come from very different standpoints when it comes to what value means to their organisations: “While the CFO-CIO relationship is interconnected, sometimes it can become divided, as both often speak different ‘languages’ about the same topic”.


Ignore API security at your peril

Many organizations are quick to embrace the potential and possibilities of connected devices and apps. However, they frequently neglect to put in place the right technology and processes needed to make their APIs secure. Understanding APIs in terms of private/partner/public differences and understanding that these are not the same as internal/external is just the start. Organizations should have both an API strategy and a well-managed API management platform in place, so that a thorough security review is undertaken before teams expose APIs to anybody or roll out certain API designs. Similarly, any identified issue needs to be handled in a highly structured way. This includes conducting a full assessment of the impact and scope of reported vulnerabilities and having processes in place to ensure that all these issues are then resolved in a timely manner to prevent bigger problems arising further down the road. As organizations push ahead with using APIs to power up digital transformation and deploy a new generation of app-based services, so the risk of unauthorized access and data exposure is growing.


AI Liability Risks to Consider

Most AI systems are not autonomous. They provide results, they make recommendations, but if they're going to make automatic decisions that could negatively impact certain individuals or groups (e.g., protected classes), then not only should a human be in the loop, but a group of individuals who can help identify the potential risks early on, such as people from legal, compliance, risk management, privacy, etc. ... It states, "The data subject shall have the right not to be subject to a decision based solely on automated processing, including profiling, which produces legal effects concerning him or her or similarly significantly affects him or her." While there are a few exceptions, such as getting the user's express consent or complying with other laws EU members may have, it's important to have guardrails that minimize the potential for lawsuits, regulatory fines and other risks. "You have people believing what is told to them by the marketing of a tool and they're not performing due diligence to determine whether the tool actually works," said Devika Kornbacher, a partner at law firm Vinson & Elkins. "Do a pilot first and get a pool of people to help you test the veracity of the AI output – data science, legal, users or whoever should know what the output should be."


Digital transformation: 3 priorities for CIOs facing a tough climb

Leading a successful digital transformation is like leading a mountain climbing expedition: It takes courage, leadership, and perseverance. Consider these tips from a leader who's done both ... Imagine boiling the ocean in one day. That’s how digital transformation feels sometimes. The psychological impact becomes unbearable and overwhelming. By preparing and staying the course, however, digital transformation becomes an achievable feat with lasting outcomes. In the case of our climb, preparing meant wearing the right clothes, packing the right things, communicating with each other, trusting one another, fuelling ourselves with energy bars, breaking down the path into smaller chunks, and learning about the road ahead. As a leader, I ventured to turn our performance up that mountain from mediocre to exceptional. In digital transformation, this may mean upskilling the workforce and adopting new platforms. ... Climbing Mount Hood was precarious and mentally and physically difficult. I never wavered. I stuck to our goal because I knew the outcome would benefit everyone in my family. To soldier on, you must be that persistent.


How to Secure Your Cryptocurrency Wallet

Owners of Bitcoin, Ethereum, and other cryptocurrency typically trade on centralized platforms such as Robinhood, Coinbase, FTX, and others. They don't need to worry about creating and managing digital wallets since the platform handles those tasks. That's the convenience of a centralized platform. However, there are serious drawbacks to keeping your crypto assets on a platform. If the platform gets hacked, or your account credentials are stolen, or the government decides to seize your digital assets, you could lose all of your crypto investments. If you would rather not rely on these platforms to secure your digital assets and prefer not to be subject to their policies, it's better to move your digital assets off of the platform and to where you can have full control. Centralized platforms are the on-ramps to purchase digital assets with dollars. Once you make the purchase, you can take custody of your assets by transferring them to your wallet. Decentralized applications (dapp), on the other hand, require users to hold funds in their own wallet. Decentralized finance (DeFi) – such as lending, borrowing, insurance – requires using a digital wallet. DeFi is only slowly becoming available to users of centralized platforms.


How to Work Better Together: Building DEI Awareness in Tech

Increasingly, we also gatekeep on existing experience. By that I mean the problem that those new to our industry experience when they need to "get experience to get experience". This happens when entry-level roles already require some number of years of experience as a condition of hire. Without "year 0" opportunities, the only people in the available job pool will be people already behind the gate, and that number will decrease over time as people change industries, retire, or even want to go on holidays or sabbaticals. Perception of what success looks like is also a major barrier to success. A great example is the previous section, where I outlined groups of people who are not normally included in dress code; not normally actively, but rather invisibly due to lack of representation or lack of awareness of those currently in the majority. A way to start self-testing for this is to see what comes to mind when I say "successful engineer", "manager", or "CEO". Specifically: what do the people in those roles look like and sound like, by default, in your mind’s eye?


Australia Says Uber 'Interfered' With Users' Privacy

The OAIC action comes almost five years after Uber's systems were infiltrated by attackers who stole user data. Uber's cover up of the incident spurred outrage, inquiries and action by several regulators worldwide. Two attackers obtained login credentials from a private GitHub site that was used by some of Uber's engineers. They then used those login credentials to access an Amazon Web Services account that had an archive with rider and driver information. All told, there were 57 million accounts exposed. The data affected included names, email addresses and phone numbers for Uber customers as well as personal information of 7 million drivers and 600,000 driver's license numbers. Uber paid $100,000 in bitcoin to the two attackers and positioned the payment as a bug bounty. Uber did not reveal the breach until more than a year later in November 2017. Shortly after that disclosure, Uber fired Joe Sullivan, its CSO. Sullivan, who is now CSO for Cloudflare, was charged in the U.S. with obstruction of justice and misprision, which is the deliberate concealment of a felony or treasonable act.


CISO Interview Series: How Aiming for the Sky Can Help Keep Your Organization Secure

Visibility is key to understanding your landscape, to understanding what ‘your organizational landscape’ and world looks like. The capability I would invest in is looking at your cyber risk profile, ensuring that you understand your risks. If you understand your risks, then you can help translate that across the business. Or it doesn’t need to be translated. It’s already done for you because you’ve got it in a risk profile that the business understands because the business will essentially dictate that.
Once you understand your risk profile, that gives you actions you can work towards. Even if you’re using a risk framework, without a good risk assessment, you can be working on stuff that doesn’t really add value or isn’t a problem. Understanding your landscape is what gives the visibility. Focus on your basics and get your policies and processes in place so that there is structure that everyone can work from. As an example, we work to four areas: governance, risk, and compliance; security operations center; secure architecture; and secure infrastructure. They are the four pillars we align to. What that means is your secure infrastructure is critical.


Health Care’s Digital Transformation: Three Trends To Watch For

A shift is happening within our health care system that is allowing more and more data to enter the health system. According to Capital Markets, 30% of the world’s data volume is being generated by the health care industry, and by 2025, the compound annual growth rate of data for health care will reach 36%. Health care organizations must develop a plan to manage this data and integrate it with SDoH data, AI-fueled behavioral science, patient history and more to facilitate a more proactive approach to care. Value-based care — a buzzword for years now that emphasizes preventative care — may finally be within reach if health care leaders are able to harness this data and integrate it into clinical workflows. Like the health care system itself, these topics are interwoven and complex. Overcoming these challenges will require hard work and dedication from the entire health care industry, but I am confident we are making incredible strides. We’re seeing cloud adoption that would have been unimaginable just 18 months ago. 


Re-focusing your tech strategy post-Covid

Too often businesses forget about the importance of measuring these KPIs long-term – in fact, research carried out last year by AppLearn found that just 12 per cent of organisations measure the success of their technology investments after one year, falling to five per cent after three years. When you consider the time and money ploughed into software roll outs, these stats are shocking. But there’s also the fact that software evolves and the way users interact with it can change, especially with major updates – this makes assessing the performance and value of investments beyond the first few years of implementation just as important. In the age of the digital workplace, data is king and will give business leaders greater insights into the technologies used and the end-to-end employee experience. To maintain productivity in the long-term, you must move beyond surface level vanity metrics and gather intelligent data points – this could be time spent navigating tasks within applications, task error/completion rates, what pages users have visited or where they’ve looked for support.



Quote for the day:

"We are reluctant to let go of the belief that if I am to care for something I must control it." -- Peter Block

Daily Tech Digest - July 25, 2021

Discord CDN and API Abuses Drive Wave of Malware Detections

Discord’s CDN is being abused to host malware, while its API is being leveraged to exfiltrate stolen data and facilitate hacker command-and-control channels, Sophos added. Because Discord is heavily trafficked by younger gamers playing Fortnite, Minecraft and Roblox, a lot of the malware floating around amounts to little more than pranking, such as the use of code to crash an opponent’s game, Sophos explained. But the spike in info stealers and remote access trojans is more alarming, it added. “But the greatest percentage of the malware we found have a focus on credential and personal information theft, a wide variety of stealer malware as well as more versatile RATs,” the report said. “The threat actors behind these operations employed social engineering to spread credential-stealing malware, then use the victims’ harvested Discord credentials to target additional Discord users.” The team also found outdated malware including spyware and fake app info stealers being hosted on the Discord CDN.


The sixth sense of a successful leader

The Sixth Sense endowed Leader has to possess a highly developed awareness of what needs to be done, how it needs to be done, when it needs to be done, simultaneously anticipating the needs of the human resource involved on the task, and continuously visualising the anticipated outcome. For successful employment of the sixth sense the Leader needs to work on the Higher Intellect plane. This does not preclude the Leader from seeking material gains, for that is the ultimate aim of any business. However, the Leader needs to weigh the anticipated gains against likely social and environmental degradation. Similarly, the Leader needs to be steeped in definable values and ethics, which in turn act as the Sixth Sense Pillar. This Pillar will be the fulcrum enabling the Leader to leverage gains beyond cognitive reasoning, and to attain the status of a Karma Yogi. The Sixth Sense Leader, a true Karma Yogi, empowers self to develop: vision to create rather than await opportunity, by tapping dimensional awareness of the future; and analysing and risk-acceptance capability, through the capacity to subtly induce change in energy fields impacting the mission.

Why Data Management Needs An Aggregator Model

As enterprises shift to a hybrid multicloud architecture, they can no longer manage data within each storage silo, search for data within each storage silo and pay a heavy cost to move data from one silo to another. As GigaOm analyst Enrico Signoretti pointed out: "The trend is clear: The future of IT infrastructures is hybrid ... [and] it requires a different and modern approach to data management." Another key reason an aggregator model for data management is needed is that customers want to extract value from their data. To analyze and search unstructured data, vital information is stored in what is called "metadata" — information about the data itself. Metadata is like an electronic fingerprint of the data. For example, a photo on your phone might have information about the time and location when it was taken as well as who was in it. Metadata is very valuable, as it is used to search, find and index different types of unstructured data. Since storage business models are built on owning the data, storage vendors will move some blocks when moving data to the cloud rather than move all of the data.
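
As a tiny illustration of metadata acting as a searchable fingerprint for unstructured objects, consider the Python sketch below (the records and fields are invented examples, not any vendor's catalog format):

    # Invented example records: metadata acts as a searchable fingerprint for
    # unstructured objects such as photos.
    photos = [
        {"file": "IMG_0041.jpg", "taken": "2021-07-04", "location": "Lisbon", "people": ["Ana"]},
        {"file": "IMG_0042.jpg", "taken": "2021-07-04", "location": "Lisbon", "people": ["Ana", "Tom"]},
        {"file": "IMG_0107.jpg", "taken": "2021-07-18", "location": "Porto", "people": []},
    ]

    def find(metadata, **criteria):
        """Return objects whose metadata matches every given criterion."""
        results = []
        for m in metadata:
            if all(m.get(k) == v or (isinstance(m.get(k), list) and v in m[k])
                   for k, v in criteria.items()):
                results.append(m)
        return results

    print(find(photos, location="Lisbon"))  # search by location
    print(find(photos, people="Tom"))       # search by who appears in the photo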

Next-Gen Data Pipes With Spark, Kafka and k8s

In Lambda Architecture, there are two main layers – Batch and Speed. The first one transforms data in scheduled batches whereas the second is responsible for near real-time data processing. The batch layer is typically used when the source system sends the data in batches, access to the entire dataset is needed for required data processing, or the dataset is too large to be handled as a stream. In contrast, stream processing is needed for small packets of high-velocity data, where the packets are either mutually independent or packets in close vicinity form a context. Naturally, both types of data processing are computation-intensive, though the memory requirement for the batch layer is higher than for the stream layer. Architects look for solution patterns that are elastic, fault-tolerant, performant, cost-effective, flexible, and, last but not least – distributed. ... Lambda architecture is complex because it has two separate components for handling batch and stream processing of data. The complexity can be reduced if one single technology component can serve both purposes.
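
A minimal sketch of that idea, assuming PySpark with the Kafka connector on the classpath (the paths, broker address and topic name are illustrative), shows one transformation function serving both a batch layer and a speed layer:

    # Requires pyspark and the spark-sql-kafka connector on the classpath; the
    # paths, broker address and topic name below are illustrative.
    from pyspark.sql import SparkSession, functions as F

    spark = SparkSession.builder.appName("batch-and-speed").getOrCreate()

    def enrich(df):
        # Shared transformation logic, reused by both layers.
        return df.withColumn("processed_at", F.current_timestamp())

    # Batch layer: scheduled processing over the full historical dataset.
    batch = enrich(spark.read.parquet("s3://datalake/events/"))
    batch.write.mode("overwrite").parquet("s3://datalake/batch_views/")

    # Speed layer: near-real-time processing of high-velocity packets from Kafka.
    stream = (spark.readStream
              .format("kafka")
              .option("kafka.bootstrap.servers", "kafka:9092")
              .option("subscribe", "events")
              .load()
              .selectExpr("CAST(value AS STRING) AS value"))

    query = (enrich(stream)
             .writeStream
             .format("parquet")
             .option("path", "s3://datalake/speed_views/")
             .option("checkpointLocation", "s3://datalake/checkpoints/speed/")
             .start())
    # query.awaitTermination()  # block until the stream is stopped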


Moving fast and breaking things cost us our privacy and security

Tokenized identification puts the power in the user’s hands. This is crucial not just for workplace access and identity, but for a host of other, even more important reasons. Tokenized digital IDs are encrypted and can only be used once, making it nearly impossible for anyone to view the data included in the digital ID should the system be breached. It’s like Signal, but for your digital IDs. As even more sophisticated technologies roll out, more personal data will be produced (and that means more data is vulnerable). It’s not just our driver’s licenses, credit cards or Social Security numbers we must worry about. Our biometrics and personal health-related data, like our medical records, are increasingly online and accessed for verification purposes. Encrypted digital IDs are incredibly important because of the prevalence of hacking and identity theft. Without tokenized digital IDs, we are all vulnerable. We saw what happened with the Colonial Pipeline ransomware attack recently. It crippled a large portion of the U.S. pipeline system for weeks, showing that critical parts of our infrastructure are extremely vulnerable to breaches.
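
A minimal sketch of a single-use, encrypted token, using the third-party cryptography package's Fernet primitive, is shown below. It is illustrative only (real tokenized-ID schemes involve issuers, signatures, expiry and revocation infrastructure), and the identifiers are invented:

    # Requires the third-party `cryptography` package. Illustrative only.
    from typing import Optional
    import uuid

    from cryptography.fernet import Fernet

    key = Fernet.generate_key()   # held by the verifying party
    fernet = Fernet(key)
    _used_tokens = set()          # single-use bookkeeping

    def issue_token(user_id: str) -> bytes:
        nonce = uuid.uuid4().hex  # makes every token unique
        return fernet.encrypt(f"{user_id}:{nonce}".encode())

    def verify_token(token: bytes) -> Optional[str]:
        if token in _used_tokens:  # already redeemed: reject replay
            return None
        _used_tokens.add(token)
        user_id, _nonce = fernet.decrypt(token).decode().split(":", 1)
        return user_id

    t = issue_token("employee-4711")
    print(verify_token(t))  # -> employee-4711
    print(verify_token(t))  # -> None (each token can only be used once)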


Agile at 20: The Failed Rebellion

In some ways, Agile was a grassroots labor movement. It certainly started with the practitioners on the ground and got pushed upwards into management. How did this ever succeed? It’s partially due to developers growing in number and value to their businesses, gaining clout. But the biggest factor, in my view, is that the traditional waterfall approach simply wasn’t working. As software got more complicated and the pace of business accelerated and the sophistication of users rose, trying to plan everything up front became impossible. Embracing iterative development was logical, if a bit scary for managers used to planning everything. I remember meetings in the mid-2000s where you could tell management wasn’t really buying it, but they were out of ideas. What the hell, let’s try this crazy idea the engineers keep talking about. We’re not hitting deadlines now. How much worse can it get? Then to their surprise, it started working, kind of, in fits and starts. Teams would thrash for a while and then slowly gain their legs, discovering what patterns worked for that individual team, picking up momentum.


Is Consciousness Bound by Quantum Physics? We're Getting Closer to Finding Out

We’re not yet able to measure the behavior of quantum fractals in the brain – if they exist at all. But advanced technology means we can now measure quantum fractals in the lab. In recent research involving a scanning tunneling microscope (STM), my colleagues at Utrecht and I carefully arranged electrons in a fractal pattern, creating a quantum fractal. When we then measured the wave function of the electrons, which describes their quantum state, we found that they too lived at the fractal dimension dictated by the physical pattern we’d made. In this case, the pattern we used on the quantum scale was the Sierpiński triangle, which is a shape that’s somewhere between one-dimensional and two-dimensional. This was an exciting finding, but STM techniques cannot probe how quantum particles move – which would tell us more about how quantum processes might occur in the brain. So in our latest research, my colleagues at Shanghai Jiaotong University and I went one step further. Using state-of-the-art photonics experiments, we were able to reveal the quantum motion that takes place within fractals in unprecedented detail.
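
For reference, the Sierpiński triangle's fractal (Hausdorff) dimension follows from its self-similarity: it consists of 3 copies of itself, each scaled down by a factor of 2, so $d = \log 3 / \log 2 \approx 1.585$, between one and two dimensions.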


How Deepfakes Are Powering a New Type of Cyber Crime

Cybercriminals are always quick to leap onto any bandwagon that they can use to improve or modernize their attacks. Audio fakes are becoming so good that it takes a spectrum analyzer to definitively identify them, and AI systems have been developed to identify deepfake videos. If manipulating images lets you weaponize them, imagine what you can do with sound and video fakes that are good enough to fool most people. Crimes involving faked images and audio have already happened. Experts predict that the next wave of deepfake cybercrime will involve video. The working-from-home, video-call-laden “new normal” might well have ushered in the new era of deepfake cybercrime. An old phishing email attack involves sending an email to the victim, claiming you have a video of them in a compromising or embarrassing position. Unless payment is received in Bitcoin, the footage will be sent to their friends and colleagues. Scared there might be such a video, some people pay the ransom.


5 Steps to Improving Ransomware Resiliency

Enterprises need robust endpoint data protection and system security. This includes antivirus software and even whitelisting software where only approved applications can run. Enterprises need both an active element of protection and a reactive element of recovery. Companies hit with a ransomware attack can spend five days or longer recovering, so it’s imperative that companies actively implement the right backup and recovery strategies before a ransomware attack. Black hats who develop ransomware are trying to close off every escape route that would let an enterprise avoid paying the ransom. ... We urge organizations to implement a more comprehensive backup and recovery approach based on the National Institute of Standards and Technology (NIST) Cybersecurity Framework. It includes a set of best practices: using immutable storage, which prevents ransomware from encrypting or deleting backups; implementing in-transit and at-rest encryption to prevent bad actors from compromising the network or stealing your data; and hardening the environment by enabling firewalls that restrict ports and processes.


This Week in Programming: Kubernetes from Day One?

“To move to Kubernetes, an organization needs a full engineering team just to keep the Kubernetes clusters running, and that’s assuming a managed Kubernetes service and that they can rely on additional infrastructure engineers to maintain other supporting services on top of, well, the organization’s actual product or service,” they write. While this is part of StackOverflow’s reasoning — “The effort to set up Kubernetes is less than you think. Certainly, it’s less than the effort it would take to refactor your app later on to support containerization.” — Ably argues that “it seems that introducing such an enormously expensive component would merely move some of our problems around instead of actually solving them.” Meanwhile, another blog post this week argues that Kubernetes is our generation’s Multics, again centering on this idea of complexity. Essentially, the argument here is that Kubernetes is “a serious, respectable, but overly complex system that will eventually be replaced by something simpler: the Unix of distributed operating systems.” Well then, back to Unix it is!



Quote for the day:

"Leaders must encourage their organizations to dance to forms of music yet to be heard." -- Warren G. Bennis

Daily Tech Digest - July 24, 2021

Quantum entanglement-as-a-service: "The key technology" for unbreakable networks

Like classical networks, quantum networks require hardware-independent control plane software to manage data exchange between layers, allocate resources and control synchronization, the company said. "We want to be the Switzerland of quantum networking," said Jim Ricotta, Aliro CEO. Networked quantum computers are needed to run quantum applications such as physics-based secure communications and distributed quantum computing. "A unified control plane is one of several foundational technologies that Aliro is focused on as the first networking company of the quantum era," Ricotta said. "Receiving Air Force contracts to advance this core technology validates our approach and helps accelerate the time to market for this and other technologies needed to realize the potential of quantum communication." Entanglement is a physical phenomenon that involves tiny things such as individual photons or electrons, Ricotta said. When they are entangled, "then they become highly correlated" and appear together. It doesn't matter if they are hundreds of miles apart, he said.


Design for Your Strengths

Strengths and weaknesses are often mirrors of each other. My aerobic weakness had, as its inverse, a superstrength of anaerobic power. Indeed, these two attributes often go hand in hand. Finally, I had figured out how to put this to use. After the Lillehammer Olympics, I dropped out of the training camp. But I was more dedicated than ever to skating. I moved to Milwaukee, and without the financial or logistical support of the Olympic Committee, began a regimen of work, business school, and self-guided athletic training. I woke up every day at 6 a.m. and went to the rink. There I put on my pads and blocks and skated from 7 until 9:30. Then I changed into a suit for my part-time job as an engineer. At 3 p.m., I left work in Milwaukee and drove to the Kellogg Business School at Northwestern, a two-hour drive. I had class from 6 to 9 p.m., usually arrived home at 11, and lifted weights until midnight. I did that every day for two and a half years. Many people assume that being an Olympic athlete requires a lot of discipline. But in my experience, the discipline is only physical. 


‘Next Normal’ Approaching: Advice From Three Business Leaders On Navigating The Road Ahead

With some analysts predicting a "turnover tsunami" on the horizon, talent strategy has taken on a new sense of urgency. Lindsey Slaby, Founding Principal of marketing strategy consultancy Sunday Dinner, focuses on building stronger marketing organizations. She shares: Organizations are accelerating growth by attracting new talent muscle and re-skilling their existing teams. A rigorous approach to talent has never been as important as it is right now. The relationship between employer and employee has undergone significant recalibration the last year with the long-term impact of the nation’s largest work-from-home experiment yet to come into clear view. But much like the Before Times, perhaps the greatest indicator of how an organization will fare on the talent front comes down to how it invests in its people and specifically their future potential. Slaby believes there is a core ingredient to any winning talent strategy: Successful organizations prioritize learning and development. Training to anticipate the pace of change is essential. It is imperative that marketers practice ‘strategy by doing’ and understand the underlying technology that fuels their go-to-market approach.


The Beauty of Edge Computing

The volume and velocity of data generated at the edge is a primary factor that will impact how developers allocate resources at the edge and in the cloud. “A major impact I see is how enterprises will manage their cloud storage because it’s impractical to save the large amounts of data that the Edge creates directly to the cloud,” says Will Kelly, technical marketing manager for a container security startup (@willkelly). “Edge computing is going to shake up cloud financial models so let’s hope enterprises have access to a cloud economist or solution architect who can tackle that challenge for them.” With billions of industrial and consumer IoT devices being deployed, managing the data is an essential consideration in any edge-to-cloud strategy. “Advanced consumer applications such as streaming multiplayer games, digital assistants and autonomous vehicle networks demand low latency data so it is important to consider the tremendous efficiencies achieved by keeping data physically close to where it is consumed,” says Scott Schober, President/CEO of Berkeley Varitronics Systems, Inc. (@ScottBVS).


Facebook Makes a Big Leap to MySQL 8

The company entirely skipped the MySQL 5.7 release, the major release between 5.6 and 8.0. At the time, Facebook was building its custom storage engine for MySQL, called MyRocks, and didn’t want to interrupt the implementation process, the engineers write. MyRocks is a MySQL adaptation of RocksDB, a storage engine optimized for fast write performance that Instagram built to optimize Cassandra. Facebook itself was using MyRocks to power its “user database service tier,” but required some features in MySQL 8.0 to fully support such optimizations. Skipping over version 5.7, however, complicated the upgrade process. “Skipping a major version like 5.7 introduced problems, which our migration needed to solve,” the engineers admitted in the blog post. Servers could not simply be upgraded in place. They had to use a logical dump to capture the data and rebuild the database servers from scratch — work that took several days in some instances. API changes from 5.6 to 8.0 also had to be rooted out, and supporting two major versions within a single replica set is just plain tricky.


Research shows AI is often biased. Here's how to make algorithms work for all of us

Inclusive design emphasizes inclusion in the design process. The AI product should be designed with consideration for diverse groups such as gender, race, class, and culture. Foreseeability is about predicting the impact the AI system will have right now and over time. Recent research published by the Journal of the American Medical Association (JAMA) reviewed more than 70 academic publications based on the diagnostic prowess of doctors against digital doppelgangers across several areas of clinical medicine. A lot of the data used in training the algorithms came from only three states: Massachusetts, California and New York. Will the algorithm generalize well to a wider population? A lot of researchers are worried about algorithms for skin-cancer detection. Most of them do not perform well in detecting skin cancer for darker skin because they were trained primarily on light-skinned individuals. The developers of the skin-cancer detection model didn't apply principles of inclusive design in the development of their models.


Leverage the Cloud to Help Consolidate On-Prem Systems

The recommended approach is to "create or recreate" a representation of the final target system in-the-cloud, but not re-engineer any components into cloud-native equivalents. The same number of LPARs, same memory/disk/CPU allocations, same file system structures, same exact IP addresses, same exact hostnames, and network subnets are created in the cloud that represents as much as possible a "clone" of the eventual system of record that will exist on-prem. The benefit of this approach is that you can apply "cloud flexibility" to what was historically a "cloud stubborn" system. Fast cloning, ephemeral longevity, software-defined networking, API automation can all be applied to the temporary stand-in running in the cloud. As design principles are finalized based on research performed on the cloud version of the system, those findings can be applied to the on-prem final buildout. To jump-start the cloud build-out process, it is possible to reuse existing on-prem assets as the foundation for components built in the cloud. LPARs in the cloud can be based on existing mksysb images already created on-prem. 


Scaling API management for long-term growth

To manage the complexity of the API ecosystem, organisations are embracing API management tools to identify, control, secure and monitor API use in their existing applications and services. Having visibility and control of API consumption provides a solid foundation for expanding API provision, discovery, adoption and monetisation. Many organizations start with an in-house developed API management approach. However, as their API management strategies mature, they often find the increasing complexity of maintaining and monitoring the usage of APIs, and the components of their API management solution itself, a drain on technical resources and a source of technical debt. A common challenge for API management approaches is becoming a victim of one’s own success. For instance, a company that deploys an API management solution for a region or department may quickly get requests for access from other teams seeking to benefit from the value delivered, such as API discoverability and higher service reliability. While this demand should be seen as proof of a great approach to digitalization, it adds challenges and raises questions for example around capacity, access control, administration rights and governance.


Is Blockchain the Ultimate Cybersecurity Solution for My Applications?

Blockchain can provide a strong and effective solution for securing networked ledgers. However, it does not guarantee the security of individual participants or eliminate the need to follow other cybersecurity best practices. A blockchain application depends on external data and other at-risk resources, so it cannot be a panacea. The blockchain implementation code and the environments in which the technology runs must still be checked for cyber vulnerabilities. Blockchain provides stronger transactional security than traditional, centralized computing services for a secured, networked transaction ledger. For example, say I use distributed ledger technology (DLT), an intrinsic blockchain feature, while creating my blockchain-based application. DLT increases cyber resiliency because it creates a situation where there is no single point of failure. In a DLT, an attack on one or a small number of participants does not affect the other nodes. Thus, DLT helps maintain transparency and availability and allows transactions to continue. Another advantage of DLT is that endpoint vulnerabilities are addressed.
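
To illustrate why a hash-chained ledger is tamper-evident, here is a minimal Python sketch; it is a toy model, not any particular DLT implementation, and the helper names are ours. Each block commits to the hash of its predecessor, so altering an already-recorded transaction breaks the chain and is detectable by any participant holding a copy.

    import hashlib
    import json

    def block_hash(block):
        """Deterministic SHA-256 digest of a block's contents."""
        payload = json.dumps(block, sort_keys=True).encode()
        return hashlib.sha256(payload).hexdigest()

    def append_block(chain, transaction):
        """Append a block that commits to the hash of the previous block."""
        prev = block_hash(chain[-1]) if chain else "0" * 64
        chain.append({"prev_hash": prev, "transaction": transaction})

    def verify_chain(chain):
        """Any participant holding a copy can re-check every link."""
        return all(
            chain[i]["prev_hash"] == block_hash(chain[i - 1])
            for i in range(1, len(chain))
        )

    ledger = []
    append_block(ledger, {"from": "alice", "to": "bob", "amount": 10})
    append_block(ledger, {"from": "bob", "to": "carol", "amount": 4})
    print(verify_chain(ledger))                    # True
    ledger[0]["transaction"]["amount"] = 1000      # tamper with recorded history
    print(verify_chain(ledger))                    # False: the next link no longer matches

Real distributed ledgers add consensus among many nodes on top of this hashing, which is what removes the single point of failure the excerpt describes.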


Why bigger isn’t always better in banking

Of course, there are outliers. One of them is Brown Brothers Harriman, a merchant/investment bank that traces its origins back some 200 years, and that is the subject of an engaging new book, Inside Money. Historian Zachary Karabell (disclosure: we were graduate school classmates in the last millennium) offers not just an intriguing family and personal history, but a lesson in how to balance risk and ambition against responsibility and longevity—and in why bigger isn’t always better. The firm’s survival is even more remarkable given that US financial history often reads as a string of booms, bubbles, busts, and bailouts. The Panic of 1837. The Panic of 1857. The Civil War. The Panic of 1907. The Great Depression. The Great Recession of 2008. In finance, leverage—i.e., debt—is the force that allows companies to lift more than they could under their own power. It’s also the force that can crush them when circumstances change. And Brown Brothers has thrived in part by avoiding excessive leverage. Today, the bank primarily “acts as a custodian for trillions of dollars of global assets,” Karabell writes. “Its culture revolves around service.”



Quote for the day:

"Leadership is absolutely about inspiring action, but it is also about guarding against mis-action." -- Simon Sinek

Daily Tech Digest - July 23, 2021

The CISO: the enabler of innovation

With digital transformation already in focus for many businesses, adding a now-distributed workforce on top of this scenario ratchets up the security challenge. One in five CEOs and CISOs saw a major increase in all types of cyber attacks since COVID-19, with supply chain attacks topping the table side by side with ransomware. The key here is to enable and drive businesses, rather than impede them. By supporting remote workers through discreetly adjusted policies and controls, businesses can enable teams to work more effectively in their own roles. This means allowing them to access data from anywhere while providing better visibility 24/7, enabling more proactive alerts and controls. In fact, 58% of CEOs and CISOs have recognised the need for a more integrated trust framework, with 48% also substantially increasing the use of cloud-based cyber security systems. In the future, the workforce will have even more autonomy within the decentralised cultures that develop as business leaders find new ways to drive collaboration and creativity. For the CISO, this means continuously adapting to an evolving workplace.


Why unstructured data is the future of data management

Today, data is a valuable corporate asset. You’ve got to be strategic with it because it’s not just for your BI teams, but for the R&D and customer success teams. They need historical data to build new products or to improve the ones they already have. This is super relevant in manufacturing, such as in the semiconductor chip industry, but also in other industries that are so important to our economy, such as pharmaceuticals. COVID researchers depended upon access to SARS data when developing vaccines and treatments. Data often becomes valuable again later, and what if you don’t know what you have or you can’t find it? We’ve had customers in the media and entertainment business, and in the past when they wanted to find an old show, they’d need access to a tape archive. Then, they needed an asset tag to locate the tape. That can be very difficult, and it’s why archiving is not popular. Live archive solutions that are available today make archived data instantly accessible and transparently tier data so users can easily locate files and access them anytime.


Here’s how to check your phone for Pegasus spyware using Amnesty’s tool

The first thing to note is the tool is command line or terminal based, so it will take either some amount of technical skill or a bit of patience to run. We try to cover a lot of what you need to know to get up and running here, but it’s something to know before jumping in. The second note is that the analysis Amnesty is running seems to work best for iOS devices. In its documentation, Amnesty says the analysis its tool can run on Android phone backups is limited, but the tool can still check for potentially malicious SMS messages and APKs. Again, we recommend following its instructions. ... If you’re using a Mac to run the check, you’ll first need to install both Xcode, which can be downloaded from the App Store, and Python3 before you can install and run mvt. The easiest way to obtain Python3 is using a program called Homebrew, which can be installed and run from the Terminal. After installing these, you’ll be ready to run through Amnesty’s iOS instructions. If you run into issues while trying to decrypt your backup, you’re not alone. The tool was giving me errors when I tried to point it to my backup, which was in the default folder. 


Critical Jira Flaw in Atlassian Could Lead to RCE

The vulnerability has to do with a missing authentication check in Jira’s implementation of Ehcache, an open-source, distributed Java cache for general-purpose caching, Java EE and lightweight containers that is used for performance and simplifies scalability. Atlassian said that the bug was introduced in version 6.3.0 of Jira Data Center, Jira Core Data Center, Jira Software Data Center and Jira Service Management Data Center (known as Jira Service Desk prior to 4.14). According to Atlassian’s security advisory, those products exposed an Ehcache remote method invocation (RMI) network service that attackers – who can connect to the service on port 40001 and potentially 40011 – could use to “execute arbitrary code of their choice in Jira” through deserialization, due to missing authentication. RMI is an API that acts as a mechanism to enable remote communication between programs written in Java. It allows an object residing in one Java virtual machine (JVM) to invoke an object running on another JVM; often, one program runs on a server and the other on a client. ...


Improving Your Productivity With Dynamic Problems

First, a Huffman code tree is built. Let the original alphabet consist of n characters, the i-th of which occurs p_i times in the input text. Initially, all symbols are considered active nodes of the future tree, the i-th node being labeled with p_i. At each step, we take the two active vertices with the smallest labels, create a new vertex labeled with the sum of their labels, and make it their parent. The new vertex becomes active, and its two children are removed from the list of active vertices. The process is repeated until only one active vertex remains, which becomes the root of the tree. Note that the symbols of the alphabet are represented by the leaves of this tree. For each leaf (symbol), the length of its Huffman code equals the length of the path from the root of the tree to that leaf. The code itself is constructed as follows: for each internal vertex of the tree, consider the two arcs going from it to its children; we assign the label 0 to one of the arcs and 1 to the other. The code of each symbol is the sequence of zeros and ones on the path from the root to the corresponding leaf.
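
As a concrete companion to the construction described above, here is a minimal Python sketch using only the standard-library heapq and collections modules (the function name huffman_codes is ours): active nodes sit in a priority queue keyed by their labels, the two smallest are repeatedly merged until one root remains, and codes are read off by walking the arcs labeled 0 and 1.

    import heapq
    from collections import Counter

    def huffman_codes(text):
        """Build Huffman codes for the symbols of `text`: repeatedly merge the
        two active nodes with the smallest labels until one root remains."""
        freq = Counter(text)                       # p_i for each symbol
        # Heap entries are (label, tie_breaker, node); a node is either a
        # leaf symbol or a (left, right) pair of child nodes.
        heap = [(p, i, sym) for i, (sym, p) in enumerate(freq.items())]
        heapq.heapify(heap)
        if len(heap) == 1:                         # degenerate one-symbol alphabet
            return {heap[0][2]: "0"}
        tie = len(heap)
        while len(heap) > 1:
            p1, _, left = heapq.heappop(heap)      # two smallest active vertices
            p2, _, right = heapq.heappop(heap)
            heapq.heappush(heap, (p1 + p2, tie, (left, right)))
            tie += 1                               # the merged vertex becomes active
        codes = {}
        def walk(node, prefix):
            if isinstance(node, tuple):            # internal vertex: arcs 0 and 1
                walk(node[0], prefix + "0")
                walk(node[1], prefix + "1")
            else:                                  # leaf: path from the root is the code
                codes[node] = prefix
        walk(heap[0][2], "")
        return codes

    print(huffman_codes("abracadabra"))            # e.g. {'a': '0', 'r': '110', ...}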


Top 5 NCSC Cloud Security Principles for Compliance

Modern business IT infrastructures are complex, and data regularly moves between different systems across the network. It’s critical to protect sensitive data belonging to your customers and employees as it traverses between business applications/devices and the cloud. It’s also imperative that your cloud vendor protects data in transit inside the cloud, such as when data is replicated to a different region to ensure high availability. ... Different regulations have different requirements about where protected data can be stored. For example, some regulations stipulate that data can only be transferred to companies with sufficient levels of protection in processing personal data. If your business opts for a cloud provider that doesn’t provide transparency over the location of data, you could end up unknowingly in breach of regulations. ... The last thing your business wants is to use a public cloud service only to find that a malicious hacker accessed your sensitive data by compromising another customer first. This type of concerning non-compliance scenario can happen when there is insufficient separation between different customers of a cloud service.
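
As a small, hedged illustration of the data-in-transit principle from the client side, the Python sketch below builds a TLS context that verifies the server certificate and refuses legacy protocol versions before uploading anything; the endpoint URL is a placeholder, not a real service.

    import ssl
    import urllib.request

    # Placeholder endpoint (".invalid" never resolves); it only illustrates the
    # client-side posture: always use TLS, verify the certificate, refuse old versions.
    CLOUD_API = "https://storage.example-cloud.invalid/upload"

    context = ssl.create_default_context()             # verifies the server certificate chain
    context.minimum_version = ssl.TLSVersion.TLSv1_2   # reject legacy TLS/SSL versions

    request = urllib.request.Request(CLOUD_API, data=b"sensitive payload", method="PUT")
    try:
        with urllib.request.urlopen(request, context=context) as response:
            print(response.status)
    except OSError as exc:                             # includes URLError / DNS failures
        print(f"transfer failed: {exc}")               # a real client would log and alert here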


Data and Analytics Salaries Heat Up in Recovery Economy

There are a few reasons why the market is really strong for data scientist and analytics pros right now. First, we are coming off a period of stagnation where no one wanted to change jobs and salaries stayed the same. That means those individuals who were considering a job change most likely put those plans on hold during the pandemic. Now all those people are getting back into the market. Second, there are so many new remote job opportunities, which opens up a whole new realm of job possibilities for data science and analytics pros. Third, as people move on to new jobs, they create vacancies where they were, opening up additional job vacancies. Fourth, there are some industries that had to change their business models to continue to operate during the pandemic economy. Burtch Works specifically points to retail, which had to enable digital channels to replace sales lost in brick-and-mortar stores. The Burtch Works report notes that many retailers have been expanding their data science and analytics teams and offering higher compensation than Burtch Works has typically seen in retail.


Home-office networks demand better monitoring tools

Networking professionals said they are enhancing their network operations toolsets in three primary ways. First, 54.2% are looking for tools that deliver security-related insights into home-office environments. This will help them collaborate with security teams to ensure that their increasingly distributed networks are compliant with policies. It will also help them discern whether a user-experience issue is related to a security problem. Second, 52.6% need new dashboard and reporting features that allow them to focus on home offices and remote workers, which will help admins and engineers spot problems and troubleshoot them more efficiently. If their existing tools lack adequate dashboard and reporting customization, they’ll have to look elsewhere for this view into their networks. Third, 49.4% need to upgrade the scalability of their tools. ... Network teams will need to integrate their tools with other systems to improve their ability to support home workers. For instance, 43% said home-office monitoring requirements are producing a need for their monitoring tools to integrate with their SD-WAN or secure access service edge (SASE) solution.


Hybrid work: 7 ways to enable asynchronous collaboration

One of the main differences between asynchronous and synchronous work is that the former tends to center on time- or task-defined work processes. “Asynchronous work requires a grasp of what the outcome – the final product of work – needs to be, as opposed to the amount of time spent in close coordination producing the final product,” says Dee Anthony, director at global technology research and advisory firm ISG. IT leaders need to get better at defining, managing, and measuring outcomes. Anthony suggests taking a page out of the agile playbook: Identify outcomes, estimate the effort required to accomplish them, track work velocity, and perform regular reviews. You must also foster a culture of trust. “Having people work across time, even in the same country, means that the old nine-to-five is out the window,” says Iain Fisher, director at ISG. “Managers cannot be there all the time, so a culture change of trust and respect must evolve.” ... “Working asynchronously requires very strong written communication skills to avoid ambiguity and misunderstanding,” says Lars Hyland.


Outcome Mapping - How to Collaborate With Clarity

The Outcome Map is an excellent way to create energetic communication, clarity, and alignment from the start (or re-start) of any initiative. It also reminds you to stay on track as you progress and helps you recognize when you’re drifting from the path. By adding measurements and methods, you can describe where you want to go and how you plan to get there. In both project and product approaches, clarity of outcomes is critical, but what’s often forgotten are the factors affecting the odds of achieving the outcome. Outcome mapping allows us to explore, anticipate, and design mitigation approaches for the factors impacting our desired outcome. For this reason, it’s also commonly referred to as impact mapping. In practice, you can map many factors involved in a given outcome, but a few critical ingredients should always be present. Defining measures (or indicators) of progress (summarized as ‘Measures’ in the map itself) allows you to measure and celebrate progress without waiting until the distant deadline of your primary outcome to find out if you’ve succeeded or failed.



Quote for the day:

"It is the responsibility of leadership to provide opportunity, and the responsibility of individuals to contribute." -- William Pollard