Daily Tech Digest - May 22, 2022

6 business risks of shortchanging AI ethics and governance

When enterprises build AI systems that violate users’ privacy, that are biased, or that do harm to society, it changes how their own employees see them. Employees want to work at companies that share their values, says Steve Mills, chief AI ethics officer at Boston Consulting Group. “A high number of employees leave their jobs over ethical concerns,” he says. “If you want to attract technical talent, you have to worry about how you’re going to address these issues.” According to a survey released by Gartner earlier this year, employee attitudes toward work have changed since the start of the pandemic. Nearly two-thirds have rethought the place that work should have in their life, and more than half said that the pandemic has made them question the purpose of their day job and made them want to contribute more to society. And, last fall, a study by Blue Beyond Consulting and Future Workplace demonstrated the importance of values. According to the survey, 52% of workers would quit their job — and only 1 in 4 would accept one — if company values were not consistent with their values. 


The Never-Ending To-Do List of the DBA

Dealing with performance problems is usually the biggest post-implementation nightmare faced by DBAs. As such, the DBA must be able to proactively monitor the database environment and to make changes to data structures, SQL, application logic, and the DBMS subsystem itself in order to optimize performance. ... Applications and data are increasingly required to be up and available 24 hours a day, seven days a week. Globalization and e-business are driving many organizations to implement no-downtime, around-the-clock systems. To manage in such an environment, the DBA must ensure data availability using non-disruptive administration tactics. ... Data, once stored in a database, is not static. The data may need to move from one database to another, from the DBMS into an external data set, or from the transaction processing system into the data warehouse. The DBA is responsible for efficiently and accurately moving data from place to place as dictated by organizational needs. ... The DBA must implement an appropriate database backup and recovery strategy for each database file based on data volatility and application availability requirements.


The brave, new world of work

The recent disruptions to the physical workplace have highlighted the importance of the human connections that people make on the job. In an excerpt from her new book, Redesigning Work, Lynda Gratton of the London Business School plays off an insight made nearly 50 years ago by sociologist Mark Granovetter. Granovetter famously discussed the difference between “weak” and “strong” social ties and showed that, when it came to finding jobs, weak ties (the loose acquaintances with whom you might occasionally exchange an email but don’t know well) could actually be quite powerful. Gratton applies this thinking to the way that networks are formed on the job, and to how people organize to get their work done, get new information, and innovate. She concludes that, especially in an age of remote and hybrid work, companies have to redouble their efforts to ensure that employees are able to establish and mine the power of weak ties. For Gratton, the ability to create such connections is a must-have. ... Now more than ever, people have to engage in the often challenging task of drawing boundaries. 


Most-wanted soft skills for IT pros: CIOs share their recruiting tips

Today’s IT organizations are called upon to drive and deliver significant transformation as technology seeps into all corners of a company and its products and services. With that, new and refined skills are necessary for successful technology leaders to influence business outcomes, innovation, and product development. Empathy, managing ambiguity, and collaborative influence drive innovation and are attributes we look for at MetaBank as we hire and develop top talent. Empathy lies at the core of successful problem-solving – viewing a problem from various angles leads to better solutions. ... Leaders often face challenging circumstances where they must quickly make a tough call with insufficient information. Making good choices in these situations can be critical for an organization’s success. It isn’t always easy to assess this in an interview, but behavioral interview questions and careful follow-up can help elicit specific examples from a candidate’s past work experience that may shed light on their judgment.


6 key steps to develop a data governance strategy

Much of the daily work of data governance occurs close to the data itself. The tasks that emerge from the governance strategy will often be in the hands of engineers, developers and administrators. But in too many organizations, these roles operate in silos separated by departmental or technical boundaries. To develop and apply a governance strategy that can consistently work across boundaries, some top-down influence is required. ... Horror stories of fines for breaching the EU's GDPR law on data privacy and protection might keep business leaders awake at night. This drastic approach may generate some interoffice memos or even unlock some budgetary constraints, but that would be a defensive reaction and possibly create resentment among stakeholders, which is no way to secure long-term good data governance. Instead, try this incremental approach, which should be much more attractive to executives: "Data governance is something we already do, but it's largely informal and we need to put some process around it. In doing so, we will meet regulatory demands, but we will also be a more functional, resilient organization."


8 Master Data Management Best Practices

When software development began embracing agile methodologies, its value to the business skyrocketed. That’s why we believe an MDM best practice is to embrace DataOps. DataOps acknowledges the interconnected nature of data engineering, data integration, data quality, and data security/privacy. It aims to help organizations rapidly deliver data that not only accelerates analytics but also enables analytics that were previously deemed impossible. DataOps provides a myriad of benefits, ranging from “faster cycle times” to “fewer defects and errors” to “happier customers.” By adopting DataOps, your organization will have in place the practices, processes, and technologies needed to accelerate the delivery of analytics. You’ll bring rigor to the development and management of data pipelines. And you’ll enable CI/CD across your data ecosystem.
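As a rough, hypothetical sketch of what that rigor can look like in practice (not taken from the article), the check below is the kind of pytest-style data-quality test a DataOps CI pipeline might run before a pipeline change is merged; the file, table, and column names are invented for illustration.

```python
# Hypothetical data-quality gate for a DataOps CI pipeline.
# Assumes the pipeline writes its staging output to a Parquet file;
# the path and column names are illustrative only.
import pandas as pd


def load_staging_customers() -> pd.DataFrame:
    # In a real pipeline this would read from the staging area
    # (a warehouse table, S3 object, etc.).
    return pd.read_parquet("staging/customers.parquet")


def test_no_duplicate_customer_ids():
    df = load_staging_customers()
    assert df["customer_id"].is_unique, "duplicate customer_id values found"


def test_required_fields_not_null():
    df = load_staging_customers()
    for col in ("customer_id", "email", "created_at"):
        assert df[col].notna().all(), f"null values found in {col}"
```

Run with pytest in CI, a failing check blocks the merge, which is one concrete way "fewer defects and errors" shows up in a data pipeline.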


5 tips for building your innovation ecosystem

A common mistake when looking for innovative technology vendors is to look at companies touted as the most innovative or to go with best-of-breed, on the assumption that innovation is baked into their roadmap. It’s likely that neither approach will net you the innovation you’re looking for. Best-of-breed works well for internal IT such as your ERP or CRM, or anything under the covers in terms of client-facing solutions, but when it comes to your value proposition and differentiation you need to look elsewhere. In this case, the best-of-breed tools become the table stakes that you utilize as the foundation for your ecosystem or industry-cloud and your core IP comprises your own IP plus that of those innovative players that you’ve developed unique relationships with. The “most innovative” lists you find on the internet are often based on public or editor opinion and end up surfacing the usual suspects with strong brand awareness. While they may be leading players in the market, this does not guarantee continued innovation. If you do look at the “most innovative” lists, be sure to check the methodology involved and see how it fits your own definition and expectations for what constitutes innovation.


Zen and the Art of Data Maintenance: All Data is Suffering (Part 1)

Data can be used for many types of nefarious activities. For instance, an article in Wired described how a website stored video data regarding child sex abuse acts and how they used this data in threatening, destructive ways leading to all sorts of suffering, including suicide attempts. We are often bombarded with social media data (both facts and misinformation) that are designed to hold our attention through emotional disturbances such as fear. These are generally intended to elicit reactions or control behavior regarding purchasing, voting, mindshare, or almost any other matter. Have you suffered with data? How? Data is the plural form of the Latin word ‘datum’, which Merriam-Webster defines as ‘something given or admitted as a basis for reasoning or inference’. Thus, everything we receive through our senses could be considered data. It could be numbers, text, things we see, hear, or feel. But how could all data be suffering? What about positive data that communicates increased sales, better health, positive comments, data showing helpful contributions, and so on?


The Metamorphosis of Data Governance: What it Means Today

There’s nothing more galvanizing to an organization’s board of directors—or the C-Level executives who directly answer to it—than stiff monetary penalties for noncompliance with regulations. Zoom reached a settlement for almost $100 million for such issues. Even before this particular example, data governance was inexorably advancing to its current conception as a means of facilitating access control, data privacy, and security. “These are big ticket fines that are coming up,” Ganesan remarked. “Boards are saying we need to have guardrails around our data. Now, what has changed in the last few years is that part of governance, which is security and privacy, is going from being passive to more active.” Such activation entails not only what data governance focuses on, but also what its specific policies focus on. The regulatory, risk mitigation side of data governance is currently being emphasized. It’s no longer adequate to have guidelines or even rules on paper about how data are accessed—top solutions in this space can propel those policies into source systems to ensure adherence when properly implemented.


Five Steps Every Enterprise Architect Should Take for Better Presentation

Architects invariably care about the material they’re discussing. The mistake is believing or assuming that the audience cares as intently. They may. They may already be familiar with the content. This may simply be a status update on the latest digital transformation project, and everyone is knowledgeable about the subject matter. ... Generally speaking, the audience isn’t going to automatically care as much about the material as the Architect presenting it does. The key to this step is usually the hardest of all the points made in this article: empathy. Thinking about what you would do or what you would be interested in if you were the listener is not empathy. That’s simply you projecting your own headspace onto the audience. Trying to understand how that person is receiving your information is the key. Why do they care? What aspects will they be interested in? To do this requires knowing in advance who you will be speaking to and knowing their background, their education, their professional position, their issues or problems with the subject at hand… knowing, in effect, through what lens they will be viewing your content.



Quote for the day:

"Leadership should be born out of the understanding of the needs of those who would be affected by it." -- Marian Anderson

Daily Tech Digest - May 21, 2022

How to make the consultant’s edge your own

What actually works, should the organization be led by a braver sort of leadership team, is a change in the culture of management at all levels. The change is that when something bad happens, everyone in the organization, from the board of directors on down, assumes the root cause is systemic, not a person who has screwed up. In the case of my client’s balance sheet fiasco, the root cause turned out to be everyone doing exactly what the situation they faced Right Now required. What had happened was that a badly delayed system implementation, coupled with the strategic decision to freeze the legacy system being replaced, led to a cascade of PTFs (Permanent Temporary Fixes, for the uninitiated) to get through month-end closes. The PTFs, being temporary, weren’t tested as thoroughly as production code. But being permanent, they accumulated and sometimes conflicted with one another, requiring more PTFs each month to get everything to process. The result: month ends did close, nobody had to tell the new system implementation’s executive sponsor about the PTFs and the risks they entailed, and nobody had to acknowledge that freezing the legacy system had turned out to be a bad call.


SBOM Everywhere: The OpenSSF Plan for SBOMs

The SBOM Everywhere working group will focus on ensuring that existing SBOM formats match documented use cases and developing high-quality open source tools to create SBOM documents. Although some of this tooling exists today, more tooling will need to be built. The working group has also been tasked with developing awareness and education campaigns to drive SBOM adoption across open source, government and commercial industry ecosystems. Notably, the U.S. federal government has taken a proactive stance on requiring the use of SBOMs for all software consumed and produced by government agencies. The Executive Order on Improving the Nation’s Cybersecurity cites the increased frequency and sophistication of cyberattacks as a catalyst for the public and private sectors to join forces to better secure software supply chains. Among the mandates is the requirement to use SBOMs to enhance software supply chain security. For government agencies and the commercial software vendors who partner and sell to them, the SBOM-fueled future is already here.
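To make the SBOM idea concrete, here is a minimal sketch (not from the article) of consuming one. It assumes an SBOM in CycloneDX JSON format, one of the common formats SBOM tooling produces, and simply lists the components it declares; the file name is hypothetical.

```python
# Minimal sketch: read a CycloneDX JSON SBOM and list the declared components.
# Assumes the standard top-level "components" array with "name"/"version"/"purl"
# fields; "sbom.json" is a hypothetical file produced by an SBOM generator.
import json

with open("sbom.json") as f:
    sbom = json.load(f)

for component in sbom.get("components", []):
    name = component.get("name", "<unknown>")
    version = component.get("version", "<unknown>")
    purl = component.get("purl", "")
    print(f"{name} {version} {purl}")
```

An inventory like this is what makes it possible to answer "are we shipping the vulnerable version of this library?" within minutes of the next Log4j-style disclosure.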


Cybersecurity pros spend hours on issues that should have been prevented

“Security is everyone’s job now, and so disconnects between security and development often cause unnecessary delays and manual work,” said Invicti chief product officer Sonali Shah. “Organizations can ease stressful overwork and related problems for security and DevOps teams by ensuring that security is built into the software development lifecycle, or SDLC, and is not an afterthought,” Shah added. “Application security scanning should be automated both while the software is being developed and once it is in production. By using tools that offer short scan times, accurate findings prioritized by contextualized risk and integrations into development workflows, organizations can shift security left and right while efficiently delivering secure code.” When it comes to software development, innovation and security don’t need to compete, according to Shah. Rather, they’re inherently linked. “When you have a proper security strategy in place, DevOps teams are empowered to build security into the very architecture of application design,” Shah said.


SmartNICs power the cloud, are enterprise datacenters next?

For all the potential SmartNICs have to offer, there remain substantial barriers to overcome. The high price of SmartNICs relative to standard NICs is one of many. Networking vendors have been chasing this kind of I/O offload functionality for years, with things like TCP offload engines, Kerravala said. "That never really caught on and cost was the primary factor there." Another challenge for SmartNIC vendors is the operational complexity associated with managing a fleet of SmartNICs distributed across a datacenter or the edge. "There is a risk here of complexity getting to the point where none of this stuff is really usable," he said, comparing the SmartNIC market to the early days of virtualization. "People were starting to deploy virtual machines like crazy, but then they had so many virtual machines they couldn't manage them," he said. "It wasn't until VMware built vCenter that companies had one unified control plane for all their virtual machines. We don't really have that on the SmartNIC side." That lack of centralized management could make widespread deployment in environments that don't have the resources commanded by the major hyperscalers a tough sell.


Fantastic Open Source Cybersecurity Tools and Where to Find Them

Organizations benefit greatly when threat intelligence is crowdsourced and shared across the community, said Sanjay Raja, VP of product at Gurucul. "This can provide immediate protection or detection capabilities," he said, "while reducing the dependency on vendors, who often do not provide updates to systems for weeks or even months." For example, CISA has an Automated Indicator Sharing platform. Meanwhile, in Canada, there's the Canadian Cyber Threat Exchange. "These platforms allow for the real-time exchange and consumption of automated, machine-readable feeds," explained Isabelle Hertanto, principal research director in the security and privacy practice at Info-Tech Research Group. This steady stream of indicators of compromise can help security teams respond to network security threats, she told Data Center Knowledge. In fact, the problem isn't the lack of open source threat intelligence data, but an overabundance, she said. To help data center security teams cope, commercial vendors are developing AI-powered solutions to aggregate and process all this information. "We see this capability built into next generation commercial firewalls and new SIEM and SOAR platforms," Hertanto said.
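As a rough illustration of what consuming a machine-readable indicator feed looks like, the sketch below pulls a JSON list of indicators of compromise from a hypothetical URL and checks them against a set of observed IP addresses. Real exchanges such as CISA's AIS use richer formats (STIX over TAXII), so treat this only as the general shape of the workflow; the URL and schema are invented.

```python
# Illustrative only: poll a hypothetical JSON indicator feed and flag matches.
# Real platforms (e.g. CISA AIS) use STIX/TAXII; URL and schema here are invented.
import requests

FEED_URL = "https://example.org/feeds/indicators.json"  # hypothetical feed


def fetch_indicators() -> set[str]:
    response = requests.get(FEED_URL, timeout=10)
    response.raise_for_status()
    # Assume the feed is a JSON array of objects with an "ip" field.
    return {item["ip"] for item in response.json()}


def flag_matches(observed_ips: set[str]) -> set[str]:
    # Intersect observed traffic with the indicator set.
    return observed_ips & fetch_indicators()


if __name__ == "__main__":
    print(flag_matches({"203.0.113.7", "198.51.100.20"}))
```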


Living better with algorithms

Together with Shah and other collaborators, Cen has worked on a wide range of projects during her time at LIDS, many of which tie directly to her interest in the interactions between humans and computational systems. In one such project, Cen studies options for regulating social media. Her recent work provides a method for translating human-readable regulations into implementable audits. To get a sense of what this means, suppose that regulators require that any public health content — for example, on vaccines — not be vastly different for politically left- and right-leaning users. How should auditors check that a social media platform complies with this regulation? Can a platform be made to comply with the regulation without damaging its bottom line? And how does compliance affect the actual content that users do see? Designing an auditing procedure is difficult in large part because there are so many stakeholders when it comes to social media. Auditors have to inspect the algorithm without accessing sensitive user data. They also have to work around tricky trade secrets, which can prevent them from getting a close look at the very algorithm that they are auditing because these algorithms are legally protected.


CFO perspectives on leading agile change

In an agile organization, leadership-level priorities cascade down to inform every part of the business. For this reason, CFOs talked extensively about the importance of setting up a prioritization framework that is as objective as possible. Many participants mentioned that it can be challenging to work out priorities through the QBR process, because different teams lack an institutional mechanism through which to weigh different work segments against one another and prioritize between them. Most CFOs agreed that some degree of direction from the top is required in this area. One CFO said he thinks of his organization as a “prioritization jar”: leadership puts big stones in the jar first and then fills in the spaces with sand. These prioritization “stones” might be six key projects identified by management, or they might be 20 key initiatives chosen through a mixture of leadership direction and feedback from tribes. A second challenge emerged regarding shifting resources among teams or clusters responsible for individual initiatives. When asked what they would do if they had a magic wand, several CFOs said they need better ways to reallocate resources at short notice. 


Friend Or Foe: Delving Into Edge Computing & Cloud Computing

One of the most significant features of edge computing is decentralization. Rather than sending everything to a central cloud data center, edge computing makes compute and communication resources available at the edge of the network, close to where data is gathered or an action is taken, so that real-time execution is possible wherever it is needed. The two most significant advantages of edge computing are increased performance and lower operational expenses. ... The first thing to realize is that cloud computing and edge computing are not rival technologies. They aren’t different solutions to the same problem; rather, they’re two distinct ways of addressing particular problems. Cloud computing is ideal for scalable applications that must be ramped up or down depending on demand. Web servers, for example, can request extra resources during periods of heavy usage to ensure smooth service without incurring any long-term hardware expenses.


Why AI and autonomous response are crucial for cybersecurity

Remote work has become the norm, and outside the office walls, employees are letting down their personal security defenses. Cyber risks introduced by the supply chain via third parties are still a major vulnerability, so organizations need to think about not only their defenses but those of their suppliers to protect their priority assets and information from infiltration and exploitation. And that’s not all. The ongoing Russia-Ukraine conflict has provided more opportunities for attackers, and social engineering attacks have ramped up tenfold and become increasingly sophisticated and targeted. Both play into the fears and uncertainties of the general population. Many security industry experts have warned about future threat actors leveraging AI to launch cyber-attacks, using intelligence to optimize routes and hasten their attacks throughout an organization’s digital infrastructure. “In the modern security climate, organizations must accept that it is highly likely that attackers could breach their perimeter defenses,” says Steve Lorimer, group privacy and information security officer at Hexagon.


Service Meshes Are on the Rise – But Greater Understanding and Experience Are Required

We explored the factors influencing people’s choices by asking which features and capabilities drive their organization’s adoption of service mesh. Security is a top concern, with 79% putting their faith in techniques such as mTLS authentication of servers and clients during transactions to help reduce the risk of a successful attack. Observability came a close second behind security, at 78%. As cloud infrastructure has grown in importance and complexity, we’ve seen a growing interest in observability to understand the health of systems. Observability entails collecting logs, metrics, and traces for analysis. Traffic management came third (62%). This is a key consideration given the complexity of cloud native that a service mesh is expected to help mitigate. ... Potential issues here include latency, lack of bandwidth, security incidents, the heterogeneous composition of the cloud environment, and changes in architecture or topology. Respondents want a service mesh to overcome these networking and in-service communications challenges.



Quote for the day:

"To command is to serve : nothing more and nothing less." -- Andre Marlaux

Daily Tech Digest - May 19, 2022

Five areas where EA matters more than ever

While resiliency has always been a focus of EA, “the focus now is on proactive resiliency” to better anticipate future risks, says Barnett. He recommends expanding EA to map not only a business’ technology assets but all its processes that rely on vendors as well as part-time and contract workers who may become unavailable due to pandemics, sanctions, natural disasters, or other disruptions. Businesses are also looking to use EA to anticipate problems and plan for capabilities such as workload balancing or on-demand computing to respond to surges in demand or system outages, Barnett says. That requires enterprise architects to work more closely with risk management and security staff to understand dependencies among the components in the architecture to better understand the likelihood and severity of disruptions and formulate plans to cope with them. EA can help, for example, by describing which cloud providers share the same network connections, or which shippers rely on the same ports to ensure that a “backup” provider won’t suffer the same outage as a primary provider, he says.


Build or Buy? Developer Productivity vs. Flexibility

To make things a bit more concrete, let’s look at a very simple example that shows the positives of both sides. Developers are the primary audience for InfluxData’s InfluxDB, a time series database. It provides both client libraries and direct access to the database via API to give developers an option that works best for their use case. The client libraries provide best practices out of the box so developers can get started reading and writing data quickly. Things like batching requests, retrying failed requests and handling asynchronous requests are taken care of so the developer doesn’t have to think about them. Using the client libraries makes sense for developers looking to test InfluxDB or to quickly integrate it with their application for storing time series data. On the other hand, developers who need more flexibility and control can choose to interact directly with InfluxDB’s API. Some companies have lengthy processes for adding external dependencies or already have existing internal libraries for handling communication between services, so the client libraries aren’t an option.
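The trade-off reads roughly like this in code. The first path in the sketch below (not from the article) uses the official InfluxDB 2.x Python client library, which handles serialization, batching, and retries for you; the second hits the HTTP write API directly with a raw line-protocol payload, trading convenience for control. The URL, token, org, and bucket values are placeholders.

```python
# Two ways to write the same point to InfluxDB 2.x; credentials are placeholders.
import requests
from influxdb_client import InfluxDBClient, Point
from influxdb_client.client.write_api import SYNCHRONOUS

URL, TOKEN, ORG, BUCKET = "http://localhost:8086", "my-token", "my-org", "my-bucket"

# 1) Client library: serialization, batching, and retries handled for you.
with InfluxDBClient(url=URL, token=TOKEN, org=ORG) as client:
    write_api = client.write_api(write_options=SYNCHRONOUS)
    point = Point("cpu").tag("host", "server01").field("usage", 42.0)
    write_api.write(bucket=BUCKET, record=point)

# 2) Direct API: full control, but you own retries, batching, and error handling.
resp = requests.post(
    f"{URL}/api/v2/write",
    params={"org": ORG, "bucket": BUCKET, "precision": "s"},
    headers={"Authorization": f"Token {TOKEN}"},
    data="cpu,host=server01 usage=42",
    timeout=10,
)
resp.raise_for_status()
```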


Enterprises shore up supply chain resilience with data

“Digital dialogue between trading partners is crucial, not just for those two [direct trading partners], but also for the downstream effects,” he says, adding that when it comes to supply chains and procurement, SAP’s focus is on helping its customers ensure that the data “flows to the right trading partners so that they can make proactive decisions in moving assets, logistics and doing the right purchasing”. He further adds that where supply chain considerations have traditionally been built around “cost, control and compliance”, companies are now looking to incorporate “connectivity, conscience and convenience” alongside those other factors. On the last point regarding convenience, Henrik says this refers to having “information at my fingertips when I need it”, meaning it is important for companies to not only collect data on their operations, but to structure it in a way that drives actionable insights. “Once you have actionable insights from the data, then real change happens, and that’s really what companies are looking for,” he says.


Ransomware is already out of control. AI-powered ransomware could be 'terrifying.'

If attackers were able to automate ransomware using AI and machine learning, that would allow them to go after an even wider range of targets, according to Driver. That could include smaller organizations, or even individuals. "It's not worth their effort if it takes them hours and hours to do it manually. But if they can automate it, absolutely," Driver said. Ultimately, “it's terrifying.” The prediction that AI is coming to cybercrime in a big way is not brand new, but it still has yet to manifest, Hyppönen said. Most likely, that's because the ability to compete with deep-pocketed enterprise tech vendors to bring in the necessary talent has always been a constraint in the past. The huge success of the ransomware gangs in 2021, predominantly Russia-affiliated groups, would appear to have changed that, according to Hyppönen. Chainalysis reports it tracked ransomware payments totaling $602 million in 2021, led by Conti's $182 million. The ransomware group that struck the Colonial Pipeline, DarkSide, earned $82 million last year, and three other groups brought in more than $30 million in that single year, according to Chainalysis.


Will quantum computing ever be available off-the-shelf?

Quantum computing will never exist in a vacuum, and to add value, quantum computing components need to be seamlessly integrated with the rest of the enterprise technology stack. This includes HPC clusters, ETL processes, data warehouses, S3 buckets, security policies, etc. Data will need to be processed by classical computers both before and after it runs through the quantum algorithms. This infrastructure is important: any speedup from quantum computing can easily be offset by mundane problems like disorganized data warehousing and sub-optimal ETL processes. Expecting a quantum algorithm to deliver an advantage with a shoddy classical infrastructure around it is like expecting a flight to save you time when you don’t have a car to take you to and from the airport. These same infrastructure issues often arise in many present-day machine learning (ML) use cases. There may be many off-the-shelf tools available, but any useful ML application will ultimately be unique to the model’s objective and the data used to train it. 


Addressing the skills shortage with an assertive approach to cybersecurity

All too often, businesses do not see investing in security strategy and technologies as a priority – until an attack occurs. It might be the assumption that only the wealthiest industries or those with highly classified information would require the most up-to-date cybersecurity tactics and technology, but this is simply not the case. All organizations need to adopt a proactive approach to security, rather than having to deal with the aftermath of an incident. By doing so, companies and organizations can significantly mitigate any potential damage. Traditionally, security awareness may have been restricted to specific roles, meaning only a select few people having the training and understanding required to deal with cyber-attacks. Nowadays every role, at every level, in all industries must have some knowledge to secure themselves and their work against breaches. Training should be made available for all employees to increase their awareness, and organizations need to prioritize investment in secure, up-to-date technologies to ensure their protection. 


Easily Optimize Deep Learning with 8-Bit Quantization

There are two challenges with quantization: how to do it easily (in the past, it has been a time-consuming process) and how to maintain accuracy. Both of these challenges are addressed by the Neural Network Compression Framework (NNCF). NNCF is a suite of advanced algorithms for optimizing machine learning and deep learning models for inference in the Intel® Distribution of OpenVINO™ toolkit. NNCF works with models from PyTorch and TensorFlow. One of the main features of NNCF is 8-bit uniform quantization, using recent academic research to create accurate and fast models. The technique we will be covering in this article is called quantization-aware training (QAT). This method simulates the quantization of weights and activations while the model is being trained, so that operations in the model can be treated as 8-bit operations at inference time. Fine-tuning is used to restore the accuracy drop from quantization. QAT has better accuracy and reliability than carrying out quantization after the model has been trained. Unlike other optimization tools, NNCF does not require users to change the model manually or learn how the quantization works.
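The general QAT workflow described here, prepare the model with fake-quantization, fine-tune it, then convert to real int8 operations, looks roughly like the sketch below. For brevity it uses PyTorch's built-in eager-mode quantization utilities rather than NNCF itself (NNCF wraps a similar prepare/train/convert flow for OpenVINO-ready models); the toy model, random data, and training loop are placeholders.

```python
# Sketch of quantization-aware training using PyTorch's built-in QAT tooling.
# Not NNCF's API; the model and training data are toy placeholders.
import torch
import torch.nn as nn

model = nn.Sequential(
    torch.quantization.QuantStub(),    # marks where float -> int8 conversion happens
    nn.Linear(16, 16),
    nn.ReLU(),
    nn.Linear(16, 4),
    torch.quantization.DeQuantStub(),  # marks where int8 -> float conversion happens
)
model.train()
model.qconfig = torch.quantization.get_default_qat_qconfig("fbgemm")
qat_model = torch.quantization.prepare_qat(model)  # inserts fake-quant observers

# Short fine-tuning loop: weights/activations are fake-quantized during training,
# which is what restores the accuracy normally lost by post-training quantization.
optimizer = torch.optim.SGD(qat_model.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()
for _ in range(10):
    x, y = torch.randn(32, 16), torch.randint(0, 4, (32,))
    optimizer.zero_grad()
    loss = loss_fn(qat_model(x), y)
    loss.backward()
    optimizer.step()

qat_model.eval()
int8_model = torch.quantization.convert(qat_model)  # real 8-bit ops at inference time
```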


Apache Druid: A Real-Time Database for Modern Analytics

With its distributed and elastic architecture, Apache Druid prefetches data from a shared data layer into an infinite cluster of data servers. Because there’s no need to move data and you’re providing more flexibility to scale, this kind of architecture performs faster than a decoupled query engine such as a cloud data warehouse. Additionally, Apache Druid can process more queries per core by leveraging automatic, multilevel indexing that is built into its data format. This includes a global index, data dictionary and bitmap index, which goes beyond a standard OLAP columnar format and provides faster data crunching by maximizing CPU cycles. ... Apache Druid provides a smarter and more economical choice because of its optimized storage and query engine that decreases CPU usage. “Optimized” is the keyword here; you want your infrastructure to serve more queries in the same amount of time rather than having your database read data it doesn’t need to.
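From an application's point of view, Druid is usually queried through its SQL endpoint over HTTP; a minimal sketch (not from the article) is below. The router URL and the "events" datasource are hypothetical.

```python
# Minimal sketch: run a SQL query against Druid's HTTP SQL endpoint.
# The router URL and the "events" datasource are hypothetical.
import requests

query = """
SELECT channel, COUNT(*) AS edits
FROM "events"
WHERE __time >= CURRENT_TIMESTAMP - INTERVAL '1' HOUR
GROUP BY channel
ORDER BY edits DESC
LIMIT 10
"""

resp = requests.post(
    "http://localhost:8888/druid/v2/sql",
    json={"query": query},
    timeout=30,
)
resp.raise_for_status()
for row in resp.json():
    print(row)
```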


Compete to Communicate on Cybersecurity

At its core, cybersecurity depends on communication. Outdated security policies that are poorly communicated are just as dangerous as substandard software code and other flawed technical features. Changing human behavior in digital security falls on the technology companies themselves, which need to get better at explaining digital security issues to their employees and customers. In turn, tech companies can help employees and customers understand what they can do to make things better and why they need to be active participants in helping to defend themselves, our shared data and digital infrastructure. Instead of competing on the lowest price or claims of best service, how do we incentivize service vendors, cloud providers, device manufacturers and other relevant technology firms to pay more attention to how they communicate with users around security? Rules and regulations? Possibly. Improving how companies communicate and train on security? Absolutely. Shaping a marketplace where tech companies compete more intensively for business on the technical and training elements of security? Definitely.


A philosopher's guide to messy transformations

In the domain of expertise, people base their understanding of transformation on practical insight into the history and culture of the company. A question from an attendee on the panel I conducted illustrated this nicely: “How do you get an organization with a legacy of being extremely risk averse to embrace agility, which can be perceived as a more risky, trial-and-error approach?” The question acknowledges and accepts that the company needs to embrace agility but demonstrates neither insight nor interest as to why it needs to do so. Whether the questioner trusts senior management’s decision to embrace agility, or she has other reasons for ignoring the “why,” it is obvious that she wants to know about the “how.” Too often leaders forget about the how. And that can be a costly mistake. ... “When you have an organization that has been organically growing over 90 years, then the culture is embedded in the language and the behaviors of the people working in the organization,” he said. The strength of legacy companies is that their culture is defined by conversations and behaviors that have been evolving for decades. 



Quote for the day:

"The great leaders are like best conductors. They reach beyond the notes to reach the magic in the players." -- Blaine Lee

Daily Tech Digest - May 18, 2022

Google Cloud launches services to bolster open-source security, simplify zero-trust rollouts

On the zero-trust front, Google is introducing BeyondCorp Enterprise Essentials, which is designed to help enterprise customers begin to deploy zero-trust environments. The new solution brings context-aware access controls for SaaS applications or any other apps connected via Security Assertions Markup Language (SAML), which is an XML-based protocol that supports real-time authentication and authorization across federated Web services environments. It also includes threat and data protection capabilities, such as data loss prevention, malware and phishing protection, and URL filtering, integrated in the Chrome browser, according to Potti. “It’s a simple and effective way to protect your workforce, particularly an extended workforce or users who leverage a ‘bring your own device’ model,” Potti stated. “Admins can also use Chrome dashboards to get visibility into unsafe user activity across unmanaged devices.” BeyondCorp Enterprise includes an app and client connector that can simplify connections to apps running on other clouds such as Azure or AWS without the need to open firewalls or set up site-to-site VPN connections, Potti stated.


Deployment of Low-Latency Solutions in the Cloud

Cloud-native environments offer a common platform and interfaces to ease definition and deployment of complex application architectures. This infrastructure enables the use of mature off-the-shelf components to solve common problems such as leader election, service discovery, observability, health-checks, self-healing, scaling, and configuration management. Typically the pattern has been to run containers atop of virtual machines in these environments; however, now all the main cloud providers offer bare-metal (or near bare-metal) solutions, so even latency-sensitive workloads can be hosted in the cloud. This is the first iteration of a demonstration of how Chronicle products can be used in these architectures and includes solutions to some of the challenges encountered by our clients in cloud and other environments. By leveraging common infrastructure solutions, we can marry the strengths of Chronicle products with the convenience of modern production environments to provide simple low-latency, operationally robust systems.


FBI and NSA say: Stop doing these 10 things that let the hackers in

The joint alert recommends MFA is enforced for everyone, especially since RDP is commonly used to deploy ransomware. "Do not exclude any user, particularly administrators, from an MFA requirement," CISA notes. Incorrectly applied privileges or permissions and errors in access control lists can prevent the enforcement of access control rules and could give unauthorized users or system processes access to objects. Of course, make sure software is up to date. But also don't use vendor-supplied default configurations or default usernames and passwords. These might be 'user friendly' and help the vendor deliver faster troubleshooting, but they're often publicly available 'secrets'. The NSA strongly urges admins to remove vendor-supplied defaults in its network infrastructure security guidance. ... "These default credentials are not secure – they may be physically labeled on the device or even readily available on the internet. Leaving these credentials unchanged creates opportunities for malicious activity, including gaining unauthorized access to information and installing malicious software."


The rise of servant leadership

Though the style originated in the 1970s, servant leadership has gained momentum today as the Great Resignation reveals the pandemic’s mental toll on workers and employees leave their jobs in droves in search of more meaningful work. The pressure to attract and retain talent has never been greater, and companies are moving away from command-and-control style leadership in favor of more purpose-driven management, says David Dotlich, president and senior client partner at Korn Ferry. “We’re seeing this as a big trend across all industries,” Dotlich says. More than half of Korn Ferry’s clients now view purpose as the center of their leadership, he says. “They’re signing up for help” to answer those questions of who do we serve, how do we help, how do we make a difference, how do we change the world, and they’re receiving individual training and tools. ... Servant leaders know how to build trust, provide the tools and support that employees need to grow, remove obstacles, listen more and talk less, and let employees create their own path for success. It can backfire though if employees aren’t dedicated to the team’s core mission.


Four ways to combat the cybersecurity skills gap

Some businesses attempt to narrow the gap by retraining their IT professionals. While there is a chance that some employees with technical skills may be able and willing to take on cybersecurity positions, they still need to have someone to teach them. Most cybersecurity experts today are self-taught and there is very little that an organization can do to help because the availability of security certifications is also limited. However, the real problem is that organizations often perceive cybersecurity as something that only the dedicated cybersecurity workforce should deal with. This perception is the cause of several problems mentioned above, for example, the high level of stress and burnout for cybersecurity staff. Security teams often work alone and the rest of the organization is not aware, not educated, and worst of all: does not feel responsible for security. ... The cybersecurity industry is still a bit behind the trends and a lot of tools are still created with dedicated security specialists in mind. Such tools are difficult or even impossible to use in complex environments ...


Why You Should Care About Software Architecture

Broadly speaking, achieving “sustainability” is the focus of architectural work in software products. A software product can be considered sustainable if it is capable of meeting its current requirements, including QARs, without jeopardizing its ability to meet future requirements. As we stated in the previous section, quality attribute requirements drive the architecture, and meeting key QARs is essential to create sustainable architectural designs. Unfortunately, software systems “wear out” over time, as functional enhancements are being implemented, and new design decisions are made, which may stretch or even break the original architectural design. ... How do you know when your software system is wearing out, the same way you know when your car tires are wearing out and need to be replaced? Just as a physician may use many different kinds of tools to assess the health of an individual, different tools help a team assess software architecture fitness. Older systems may be difficult to understand because, as we mentioned earlier, their design decisions and assumptions are often not documented, and documentation, when it exists, is likely to be outdated.


Open-source standard aims to unify incompatible cloud identity systems

In a press release, Strata Identity stated that current popular cloud platforms use proprietary identity systems with individual policy languages, all of which are incompatible with each other. What’s more, each application must be hard-coded to work with a specific identity system, it added. Hexa has been designed to use IDQL to enable any number of identity systems to work together as a unified whole, without making changes to them or to applications, Strata Identity said. It works by abstracting identity and access policies from cloud platforms, authorization systems, data resources, and zero trust networks to discover what policies exist, then translates them from their native syntax into the generic, IDQL declarative policy, the vendor continued. It then orchestrates identity and access instructions across cloud systems and throughout apps, data resources, platforms, and networks by translating back into native, imperative policies of target systems via a cloud-based architecture.


Vulnerabilities found in Bluetooth Low Energy gives hackers access to numerous devices

This issue is believed to be something that can’t be easily patched, nor is it merely an error in the Bluetooth specification. This exploit could affect millions of people, as BLE-based proximity authentication was not originally designed for use in critical systems such as locking mechanisms in smart locks, according to NCC Group. “What makes this powerful is not only that we can convince a Bluetooth device that we are near it—even from hundreds of miles away—but that we can do it even when the vendor has taken defensive mitigations like encryption and latency bounding to theoretically protect these communications from attackers at a distance,” said Sultan Qasim Khan, Principal Security Consultant and Researcher at NCC Group. “All it takes is 10 seconds—and these exploits can be repeated endlessly.” To start, the cybersecurity company points out that any product relying on a trusted BLE connection is vulnerable to attacks from anywhere in the world at any given time.


Augmented reality will give us superpowers

Over the next ten years, augmented reality will replace the mobile phone as our primary interface for digital content. Early adopters will embrace the lure of new, magical capabilities. Everyone else, skeptics included, will quickly find themselves at a disadvantage without omniscience, x-ray vision, superhuman recall, and dozens of other capabilities that are not even on the drawing board yet. This will drive adoption as quickly as the transition from flip phones to smartphones. After all, not upgrading your hardware will mean missing out on layers of useful information that everyone else can see. An augmented world is coming — one with the potential to be magical, embellished with artistic content and infused with superhuman abilities. At the same time, there are risks we must avoid, as augmented reality will give tech platforms unprecedented ability to track our activities and mediate our experiences. For these reasons, we need to push for a safe and regulated metaverse, especially the augmented metaverse. It will impact all of our lives in the very near future.


What’s new with ML.NET Automated ML (AutoML) and tooling

Training machine learning models is a time-consuming and iterative task. Automated Machine Learning (AutoML) automates that process by making it easier to find the best algorithm for your scenario and dataset. AutoML is the backend that powers the training experiences in Model Builder and the ML.NET CLI. Last year we announced updates to the AutoML implementation in our Model Builder and ML.NET CLI tools based on the Neural Network Intelligence (NNI) and Fast and Lightweight AutoML (FLAML) technologies from Microsoft Research. These updates provided a few benefits and improvements over the previous solution, which include an increase in the number of models explored. ... Until recently, you could only take advantage of these AutoML improvements inside of our tools. We’re excited to announce that we’ve integrated the NNI / FLAML implementations of AutoML into the ML.NET framework so you can use them from a code-first experience. To get started today with the AutoML API, install the latest pre-release version of the Microsoft.ML and Microsoft.ML.Auto NuGet packages using the ML.NET daily feed.



Quote for the day:

"Most people live with pleasant illusions, but leaders must deal with hard realities." -- Orrin Woodward

Daily Tech Digest - May 17, 2022

Only DevSecOps can save the metaverse

We’ve previously talked about “shifting left,” or DevSecOps, the practice of making security a “first-class citizen” when it comes to software development, baking it in from the start rather than bolting it on in runtime. Log4j, SolarWinds, and other high-profile software supply chain attacks only underscore the importance and urgency of shifting left. The next “big one” is inevitably around the corner. A more optimistic view is that far from highlighting the failings of today’s development security, the metaverse might be yet another reckoning for DevSecOps, accelerating the adoption of automated tools and better security coordination. If so, that would be a huge blessing to make up for all the hard work. As we continue to watch the rise of the metaverse, we believe supply chain security should take center stage and organizations will rally to democratize security testing and scanning, implement software bill of materials (SBOM) requirements, and increasingly leverage DevSecOps solutions to create a full chain of custody for software releases to keep the metaverse running smoothly and securely.


EU Parliament, Council Agree on Cybersecurity Risk Framework

"The revised directive aims to remove divergences in cybersecurity requirements and in implementation of cybersecurity measures in different member states. To achieve this, it sets out minimum rules for a regulatory framework and lays down mechanisms for effective cooperation among relevant authorities in each member state. It updates the list of sectors and activities subject to cybersecurity obligations, and provides for remedies and sanctions to ensure enforcement," according to the Council of the EU. The directive will also establish the European Union Cyber Crises Liaison Organization Network, EU-CyCLONe, which will support the coordinated management of large-scale cybersecurity incidents. The European Commission says that the latest framework is set up to counter Europe's increased exposure to cyberthreats. The NIS2 directive will also cover more sectors that are critical for the economy and society, including providers of public electronic communications services, digital services, waste water and waste management, manufacturing of critical products, postal and courier services and public administration, both at a central and regional level.


Catalysing Cultural Entrepreneurship in India

What constitutes CCIs varies across countries depending on their diverse cultural resources, know-how, and socio-economic contexts. A commonly accepted understanding of CCIs comes from the United Nations Educational, Scientific and Cultural Organization (UNESCO), which defines this sector as “activities whose principal purpose is production or reproduction, promotion, distribution or commercialisation of goods, services, and activities of a cultural, artistic, or heritage-related nature.” CCIs play an important role in a country’s economy: they offer recreation and well-being, while spurring innovation and economic development at the same time. First, a flourishing cultural economy is a driver of economic growth as attaching commercial value to cultural products, services, and experiences leads to revenue generation. These cultural goods and ideas are also contributors to international trade. Second, although a large workforce in this space is informally organised and often unaccounted for in official labour force statistics, cultural economies are some of the biggest employers of artists, craftspeople, and technicians.


Rethinking Server-Timing As A Critical Monitoring Tool

Server-Timing is uniquely powerful, because it is the only HTTP response header that supports setting free-form values for a specific resource and makes them accessible from a JavaScript browser API separate from the Request/Response references themselves. This allows resource requests, including the HTML document itself, to be enriched with data during their lifecycle, and that information can be inspected for measuring the attributes of that resource! The only other headers that come close to this capability are the HTTP Set-Cookie / Cookie headers. Unlike Cookie headers, Server-Timing is only on the response for a specific resource where Cookies are sent on requests and responses for all resources after they’re set and unexpired. Having this data bound to a single resource response is preferable, as it prevents ephemeral data about all responses from becoming ambiguous and from contributing to a growing collection of cookies sent for remaining resources during a page load.
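On the server side, enriching a response with timing data is just a matter of setting the header; a minimal Flask sketch (not from the article) is below, with illustrative metric names and durations. In the browser, those entries then surface on the resource's PerformanceResourceTiming entry via its serverTiming property.

```python
# Minimal sketch: attach Server-Timing metrics to a response in Flask.
# Metric names and durations are illustrative placeholders.
import time
from flask import Flask, jsonify

app = Flask(__name__)


@app.route("/api/report")
def report():
    start = time.perf_counter()
    # ... fetch data, render, etc. ...
    db_ms = 53.2  # pretend this came from instrumenting the database call
    total_ms = (time.perf_counter() - start) * 1000
    response = jsonify({"status": "ok"})
    # Comma-separated metrics: name;dur=<ms>;desc="<label>"
    response.headers["Server-Timing"] = (
        f'db;dur={db_ms};desc="Database", app;dur={total_ms:.1f};desc="App total"'
    )
    return response
```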


Scalability and elasticity: What you need to take your business to the cloud

At a high level, there are two types of architectures: monolithic and distributed. Monolithic (or layered, modular monolith, pipeline, and microkernel) architectures are not natively built for efficient scalability and elasticity — all the modules are contained within the main body of the application and, as a result, the entire application is deployed as a single whole. There are three types of distributed architectures: event-driven, microservices and space-based. ... For application scaling, adding more instances of the application with load-balancing ends up scaling out the other two portals as well as the patient portal, even though the business doesn’t need that. Most monolithic applications use a monolithic database — one of the most expensive cloud resources. Cloud costs grow exponentially with scale, and this arrangement is expensive, especially regarding maintenance time for development and operations engineers. Another aspect that makes monolithic architectures unsuitable for supporting elasticity and scalability is the mean-time-to-startup (MTTS) — the time a new instance of the application takes to start. 


Proof of Stake and our next experiments in web3

Proof of Stake is a next-generation consensus protocol to secure blockchains. Unlike Proof of Work, which relies on miners racing each other with increasingly complex cryptography to mine a block, Proof of Stake secures new transactions to the network through self-interest. Validator nodes (run by people who verify new blocks for the chain) are required to put a significant asset up as collateral in a smart contract to prove that they will act in good faith. For instance, for Ethereum that is 32 ETH. Validator nodes that follow the network's rules earn rewards; validators that violate the rules will have portions of their stake taken away. Anyone can operate a validator node as long as they meet the stake requirement. This is key. Proof of Stake networks require lots and lots of validator nodes to validate and attest to new transactions. The more participants there are in the network, the harder it is for bad actors to launch a 51% attack to compromise the security of the blockchain. Once Ethereum shifts to Proof of Stake, validators will be chosen at random to create (validate) new blocks to add to the chain.
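As a toy illustration of the core incentive idea, selection weighted by stake, with slashing for misbehavior, consider the sketch below. It is not Ethereum's actual validator-selection or slashing logic, just the stake-weighted principle in a few lines of code.

```python
# Toy illustration of stake-weighted validator selection and slashing.
# Not the real Ethereum protocol; names, stakes, and penalties are invented.
import random

stakes = {"alice": 32.0, "bob": 32.0, "carol": 64.0}  # staked collateral per validator


def pick_proposer(stakes: dict[str, float]) -> str:
    """Choose the next block proposer with probability proportional to stake."""
    validators = list(stakes)
    weights = [stakes[v] for v in validators]
    return random.choices(validators, weights=weights, k=1)[0]


def slash(stakes: dict[str, float], validator: str, fraction: float = 0.5) -> None:
    """Penalize a rule-breaking validator by burning part of its stake."""
    stakes[validator] *= (1 - fraction)


print("proposer:", pick_proposer(stakes))
slash(stakes, "bob")  # say bob attested to conflicting blocks
print("stakes after slashing:", stakes)
```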


Is NLP innovating faster than other domains of AI

There have been several stages in the evolution of the natural language processing field. It started in the 80s with the expert system, moving on to the statistical revolution, to finally the neural revolution. Speaking of the neural revolution, it was enabled by the combination of deep neural architectures, specialised hardware, and a large amount of data. That said, the revolution in the NLP domain was much slower than in other fields like computer vision, which benefitted greatly from the emergence of large scale pre-trained models, which, in turn, were enabled by large datasets like ImageNet. Pretrained ImageNet models helped in achieving state-of-the-art results in tasks like object detection, human pose estimation, semantic segmentation, and video recognition. They enabled the application of computer vision to domains where the number of training examples is small, and annotation is expensive. One of the most definitive inventions in recent times was the Transformer. Developed at Google Brain in 2017, the Transformer is a novel neural network architecture based on the concept of the self-attention mechanism. The model outperformed both recurrent and convolutional models.

Before you get too excited about Power Query in Excel Online, though, remember one important difference between it and a Power BI report or a paginated report. In a Power BI report or a paginated report, when a user views a report, nothing they do – slicing, dicing, filtering etc – affects or is visible to any other users. With Power Query and Excel Online, however, you’re always working with a single copy of a document, so when one user refreshes a Power Query query and loads data into a workbook that change affects everyone. As a result, the kind of parameterised reports I show in my SQLBits presentation that work well in desktop Excel (because everyone can have their own copy of a workbook) could never work well in the browser, although I suppose Excel Online’s Sheet View feature offers a partial solution. Of course not all reports need this kind of interactivity and this does make collaboration and commenting on a report much easier; and when you’re collaborating on a report the Show Changes feature makes it easy to see who changed what.


Observability Powered by SQL: Understand Your Systems Like Never Before With OpenTelemetry Traces and PostgreSQL

Given that observability is an analytics problem, it is surprising that the current state of the art in observability tools has turned its back on the most common standard for data analysis broadly used across organizations: SQL. Good old SQL could bring some key advantages: it’s surprisingly powerful, with the ability to perform complex data analysis and support joins; it’s widely known, which reduces the barrier to adoption since almost every developer has used relational databases at some point in their career; it is well-structured and can support metrics, traces, logs, and other types of data (like business data) to remove silos and support correlation; and finally, visualization tools widely support it. ... You're probably thinking that observability data is time-series data that relational databases struggle with once you reach a particular scale. Luckily, PostgreSQL is highly flexible and allows you to extend and improve its capabilities for specific use cases. TimescaleDB builds on that flexibility to add time-series superpowers to the database and scale to millions of data points per second and petabytes of data.
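To make the "observability as SQL" point concrete: assuming OpenTelemetry spans have been landed in a PostgreSQL/TimescaleDB table, a latency breakdown is an ordinary query. The sketch below is not from the article; the spans table, its columns, and the connection string are a hypothetical schema and placeholders.

```python
# Sketch: query span data stored in PostgreSQL/TimescaleDB with plain SQL.
# The "spans" table and its columns are a hypothetical schema; DSN is a placeholder.
import psycopg2

SQL = """
SELECT span_name,
       count(*)                                                  AS calls,
       percentile_cont(0.95) WITHIN GROUP (ORDER BY duration_ms) AS p95_ms
FROM spans
WHERE start_time > now() - interval '1 hour'
GROUP BY span_name
ORDER BY p95_ms DESC
LIMIT 10;
"""

with psycopg2.connect("postgresql://user:pass@localhost:5432/otel") as conn:
    with conn.cursor() as cur:
        cur.execute(SQL)
        for span_name, calls, p95_ms in cur.fetchall():
            print(f"{span_name:40s} {calls:8d} {p95_ms:10.1f}")
```

The same query could just as easily join spans against business tables (orders, customers), which is exactly the kind of correlation that siloed observability backends make hard.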


Why cyber security can’t just say “no”

Ultimately, IT security is all about keeping the company safe from damage — financial damage, operational damage, reputational and brand damage. You’re trying to prevent a situation that harms not only the company’s well-being, but also that of its employees. That is why we need to explain the actual threats and how incidents occur. Explain what steps can be taken to lower the chances and impact of those incidents occurring, and show people how they can be part of that. People love learning new things, especially if it has something to do with their daily work. Explain the tradeoffs that are being made, at least in high-level terms. Explain how quickly a convenience, such as running a machine as an administrator, can lead to abuse. Not only will people appreciate you for your honesty, they will also have the right answer the next time the question comes up. They’ll think within the constraints and find new ways of adding value to the business, while removing from their daily work the factors that might otherwise cause an incident down the line.



Quote for the day:

"Real leadership is being the person others will gladly and confidently follow." -- John C. Maxwell

Daily Tech Digest - May 16, 2022

OAuth Security in a Cloud Native World

As you integrate OAuth into your applications and APIs, you will realize that the authorization server you have chosen is a critical part of your architecture that enables solutions for your security use cases. Using up-to-date security standards will keep your applications aligned with security best practices. Many of these standards map to company use cases, some of which are essential in certain industry sectors. APIs must validate JWT access tokens on every request and authorize them based on scopes and claims. This is a mechanism that scales to arbitrarily complex business rules and spans across multiple APIs in your cluster. Similarly, you must be able to implement best practices for web and mobile apps and use multiple authentication factors. The OAuth framework provides you with building blocks rather than an out-of-the-box solution. Extensibility is thus essential for your APIs to deal with identity data correctly. One critical area is the ability to add custom claims from your business data to access tokens. Another is the ability to link accounts reliably so that your APIs never duplicate users if they authenticate in a new way, such as when using a WebAuthn key.
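As a rough illustration of scope-and-claims authorization, here is a minimal Python sketch using PyJWT. The issuer, audience, key source and scope name are assumptions for the example; a production API would typically fetch signing keys from the authorization server's JWKS endpoint and map claims onto richer business rules.

```python
# Sketch: validating a JWT access token and enforcing a required scope.
# Issuer, audience and scope are placeholders, not values from the article.
import jwt  # PyJWT

REQUIRED_SCOPE = "orders:read"

def authorize(token: str, public_key: str) -> dict:
    claims = jwt.decode(
        token,
        public_key,
        algorithms=["RS256"],              # reject tokens signed with unexpected algorithms
        audience="api://orders",           # token must be issued for this API
        issuer="https://idp.example.com",  # and by the expected authorization server
    )
    scopes = claims.get("scope", "").split()
    if REQUIRED_SCOPE not in scopes:
        raise PermissionError(f"missing scope {REQUIRED_SCOPE}")
    return claims  # downstream code can authorize on custom claims as well
```

The decode step covers signature, expiry, audience and issuer checks on every request; the scope and claim checks on top of it are where arbitrarily complex business rules live.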


APIs Outside, Events Inside

It goes without saying that external clients of an application calling the same API version — the same endpoint — with the same input parameters expect to see the same response payload over time. The need of end users for such certainty is once again understandable, but it stands in stark contrast to the requirements of the distributed application (DA) itself. In order for distributed applications to evolve and grow at the speed required in today’s world, the autonomous development teams assigned to each constituent component need to be able to publish often-changing, forward-and-backward-compatible payloads as a single event to the same fixed endpoints, using a technique I call "version-stacking." ... A key concern of architects when exposing their applications to external clients via APIs is — quite rightly — security. Those APIs allow external users to effect changes within the application itself, so they must be rigorously protected, requiring many and frequent authorization steps. These security steps have obvious implications for performance, but they do seem necessary regardless.
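The excerpt does not define "version-stacking" precisely; one plausible reading is a single event that carries more than one payload version, so older and newer consumers can keep reading the same fixed endpoint. The sketch below is a purely hypothetical illustration under that assumption, and the field names and structure are invented for the example.

```python
# Hypothetical illustration only: an event that "stacks" payload versions so
# consumers at different maturity levels can read the one they understand.
import json

event = {
    "type": "order.created",
    "versions": {
        "v1": {"order_id": "42", "total": 19.99},
        "v2": {"order_id": "42", "total": {"amount": 1999, "currency": "EUR"}},
    },
}

def read_payload(raw: str, preferred=("v2", "v1")):
    """Pick the newest payload version this consumer understands."""
    versions = json.loads(raw)["versions"]
    for version in preferred:
        if version in versions:
            return version, versions[version]
    raise ValueError("no supported payload version")

print(read_payload(json.dumps(event)))  # ('v2', {...}) for an up-to-date consumer
```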


More money for open source security won’t work

The best guarantor of open source security has always been the open source development process. Even with OpenSSF’s excellent plan, this remains true. The plan, for example, promises to “conduct third-party code reviews of up to 200 of the most critical components.” That’s great! But guess what makes something a “critical component”? That’s right—a security breach that roils the industry. Ditto “establishing a risk assessment dashboard for the top open source components.” If we were good at deciding in advance which open source components are the top ones, we’d have fewer security vulnerabilities because we’d find ways to fund them so that the developers involved could better care for their own security. Of course, often the developers responsible for “top open source components” don’t want a full-time job securing their software. It varies greatly between projects, but the developers involved tend to have very different motivations for their involvement. No one-size-fits-all approach to funding open source development works ...


Prepare for What You Wish For: More CISOs on Boards

Recently, the Securities and Exchange Commission (SEC) made a welcome move for cybersecurity professionals. In proposed amendments to its rules to enhance and standardize disclosures regarding cybersecurity risk management, strategy, governance, and incident reporting, the SEC outlined requirements for public companies to report any board member’s cybersecurity expertise. The change reflects a growing belief that disclosure of cybersecurity expertise on boards is important as potential investors consider investment opportunities and shareholders elect directors. In other words, the SEC is encouraging U.S. public companies to beef up cybersecurity expertise in the boardroom. Cybersecurity is a business issue, particularly now as the attack surface continues to expand due to digital transformation and remote work, and cyber criminals and nation-state actors capitalize on events, planned or unplanned, for financial gain or to wreak havoc. The world in which public companies operate has changed, yet the makeup of boards doesn’t reflect that.


12 steps to building a top-notch vulnerability management program

With a comprehensive asset inventory in place, Salesforce SVP of information security William MacMillan advocates taking the next step and developing an “obsessive focus on visibility” by “understanding the interconnectedness of your environment, where the data flows and the integrations.” “Even if you’re not mature yet in your journey to be programmatic, start with the visibility piece,” he says. “The most powerful dollar you can spend in cybersecurity is to understand your environment, to know all your things. To me that’s the foundation of your house, and you want to build on that strong foundation.” ... To have a true vulnerability management program, multiple experts say organizations must make someone responsible and accountable for its work and ultimately its successes and failures. “It has to be a named position, someone with a leadership job but separate from the CISO because the CISO doesn’t have the time for tracking KPIs and managing teams,” says Frank Kim, founder of ThinkSec, a security consulting and CISO advisory firm, and a SANS Fellow.


The limits and risks of backup as ransomware protection

One option is to use so-called “immutable” backups. These are backups that, once written, cannot be changed. Backup and recovery suppliers are building immutable backups into their technology, often targeting them specifically as a way to counter ransomware. The most common method for creating immutable backups is through snapshots. In some respects, a snapshot is always immutable. However, suppliers are taking additional measures to prevent these backups from being targeted by ransomware. Typically, this means ensuring the backup can only be written to, mounted or erased by the software that created it. Some suppliers go further, such as requiring two people to enter a PIN to authorise overwriting a backup. The issue with snapshots is the volume of data they create, and the fact that those snapshots are often written to tier one storage, for speed and to lessen disruption. This makes snapshots expensive, especially if organisations need to keep days, or even weeks, of backups as protection against ransomware. “The issue with snapshot recovery is it will create a lot of additional data,” says Databarracks’ Mote.
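As one concrete way of enforcing immutability, separate from the snapshot approach the article describes, object storage with a write-once-read-many lock can stop backup copies from being altered or deleted before a retention period expires. The sketch below uses boto3 and S3 Object Lock; the bucket name, retention window and file path are placeholder assumptions, not a recommendation from the article.

```python
# Sketch: immutable backup copies via S3 Object Lock (WORM), using boto3.
# Object Lock must be enabled when the bucket is created; it cannot be
# switched on later. Outside us-east-1 a CreateBucketConfiguration with the
# region is also required.
import boto3

s3 = boto3.client("s3")

s3.create_bucket(
    Bucket="example-backup-vault",
    ObjectLockEnabledForBucket=True,
)
s3.put_object_lock_configuration(
    Bucket="example-backup-vault",
    ObjectLockConfiguration={
        "ObjectLockEnabled": "Enabled",
        "Rule": {"DefaultRetention": {"Mode": "COMPLIANCE", "Days": 30}},
    },
)
# Every object written now gets a 30-day compliance-mode retention by default,
# so the locked version cannot be deleted or shortened before that, even by an
# administrator account that has been compromised by ransomware.
with open("backup.dump", "rb") as f:
    s3.put_object(Bucket="example-backup-vault",
                  Key="db/backup-2022-05-16.dump", Body=f)
```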


Four ways towards automation project management success

Having a fundamental understanding of the relationship between problem and outcome is essential for automation success. Process mining is one of the best options a business has to expedite this process. Leyla Delic, former CIDO at Coca Cola İçecek, eloquently describes process mining as a “CT scan of your processes”, taking stock and ensuring that the automation you want to implement actually solves a problem for the business. With process mining, expect to go in and experiment, perhaps blindly, at first, learn what works, and only then expand and scale for real outcomes. A recent Forrester report found that 61% of executive decision-makers are either using, or looking at using, process mining to simplify their operations. Constructing a detailed, end-to-end understanding of processes provides the necessary basis to move from siloed, task-specific automation to more holistic process automation – making a tangible impact. With the most advanced tools available today, one can even understand in real time the actual activities and processes of knowledge workers across teams and tools, and receive automatic recommendations on how to improve work.


The Power of Decision Intelligence: Strategies for Success

While chief information officers and chief data officers are the traditional stakeholders and purchase decision makers, Kohl notes that he’s seeing increased collaboration between IT and other business management areas when it comes to defining analytics requirements. “Increasingly, line-of-business executives are advocating for analytics platforms that enable data-driven decision making,” he says. With an intelligent decisioning strategy, organizations can also use customer data -- preferably in real time -- to understand exactly where customers are on their journeys and respond accordingly -- be it with an offer for a more tailored new service, or outreach with help if they’re behind on a payment. Don Schuerman, CTO of Pega, says this helps ensure that every interaction is helpful and empathetic, versus just a blind email sent without any context. In the same way that a good intelligence integration strategy can benefit customers, the ability to analyze employee data and understand roadblocks in their workflows helps solve these problems faster and create better processes, resulting in happier, more productive employees.


Digital exhaustion: Redefining work-life balance

As workers continue to create and collaborate in digital spaces, one of the best things we can do as leaders is to let go. Let go of preconceived schedules, of always knowing what someone is working on, of dictating when and how a project should be accomplished – in effect, let go of micromanagement. Instead, focus on hiring productive, competent workers and trust them to do their jobs. Don’t manage tasks – gauge results. Use benchmarks and deadlines to assess effectiveness and success. This will make workers feel more empowered and trusted. Such “human-centric” design, as Gartner explains, emphasizes flexible work schedules, intentional collaboration, and empathy-based management to create a sustainable environment for hybrid work. According to Gartner’s evaluation, a human-centric approach to work stimulates a 28% rise in overall employee performance and a 44% decrease in employee fatigue. The data supports the importance of recognizing and reducing the impacts of digital exhaustion.


Late-Stage Startups Feel the Squeeze on Funding, Valuations

Investors are now tracking not only a prospect's burn rate but also its burn multiple, which Sekhar says measures how much cash a startup is spending relative to the amount of ARR it is adding each year. As a result, he says, deals that last year took two days to get done are this year taking two weeks since investors are engaging in far more due diligence to ensure they're betting on a quality asset. "We've seen this in the past where companies spend irresponsibly and just run off a cliff expecting that they'll raise yet another round," Sekhar says. "I think we're going back to basics and focusing on building great businesses." Midstage and late-stage security startups have begun examining how many months of capital they have and whether they should slow hiring to buy more time to prove their value, Scheinman says. Startups want to extend how long they can operate before they have to approach investors for more money, given all the uncertainty in the market, he says. As a result, Scheinman says, venture-backed firms have cut back on hiring and technology purchases and placed greater emphasis on hitting their sales numbers. 
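For readers unfamiliar with the metric, the commonly used definition of burn multiple (a general industry convention, not a figure quoted from the article) is net cash burned in a period divided by the net new ARR added in that same period; the example numbers below are illustrative only.

```python
# Common definition of burn multiple; inputs are illustrative placeholders.
def burn_multiple(net_cash_burned: float, net_new_arr: float) -> float:
    return net_cash_burned / net_new_arr

burn_multiple(10_000_000, 5_000_000)  # 2.0 -> spending $2 for every $1 of new ARR
```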



Quote for the day:

"Ninety percent of leadership is the ability to communicate something people want." -- Dianne Feinstein