Daily Tech Digest - December 20, 2021

Top 5 Internet Technologies of 2021

Speaking of React, 2021 didn’t see any diminishment of the popular Facebook-derived JavaScript library. Although React-based frameworks abound, one in particular stood out this year: Next.js, the open source framework managed by Vercel. At the end of October, Vercel announced version 12 of Next.js, which included ES modules and URL imports, instant Hot Module Replacement (HMR), and something called “Middleware” that enables you to “run code before a request is completed.” Next.js is indicative of the rise of SSGs (Static Site Generators) over the past few years, with Gatsby and Hugo being other examples. There has, however, been a noticeable move away from pure static generation — Next.js now describes itself as a “hybrid static [and] server rendering” framework. Next.js developers love its ease of use and all the fancy features (like “edge functions”), but not everyone is enamored with the output of Next.js-made apps. That’s perhaps more of an indictment of React itself than of Next.js. But it is worth noting that there is increasing pushback against React frameworks on the web, due to the amount of JavaScript they tend to use.


Why it’s time to rethink your cyber talent and retention strategy

Organisations that don’t invest in cyber skills training and development programmes for technical personnel and the wider workforce risk throttling their future internal talent marketplace. Today’s increasingly digital workplace means cyber security is everyone’s business. By extending cyber awareness and training to all employees, organisations can mobilise those individuals who demonstrate aptitude and interest, helping them build up their skill sets and acquire industry-recognised certifications that will help the organisation expand and strengthen its cyber security teams. Alongside initiating a mentorship programme to support people in making a ‘job shift’ into cyber security roles, organisations should look to facilitate defined cyber security career pathways. ... Many IT leaders are already active members of knowledge networks and communities that present a rich seam of opportunity when it comes to virtually meeting and evaluating potential candidates who are an exact match for their business, in a highly targeted way.


On the Importance of Bayesian Thinking in Everyday Life

Surprisingly, there is no consensus as to what probability really means. In general, there are two ways to think about it. One is to define probability as the observed frequency of events in many trials. For instance, if one were to toss a coin many times, approximately half of the outcomes would be heads and the other half tails. The more tosses, the closer the observed frequencies will be to 50–50. Hence, we say that the probability of tossing heads (or tails) is 50%, or 0.5. This is the so-called frequentist probability. There is also another way to think about it, known as subjective or Bayesian probability. In a nutshell, this definition states that a person’s subjective belief about how likely something is to happen is also a probability. I might say: I think there is a 50% chance it will rain tomorrow. That is a valid statement of a Bayesian probability, but not of a frequentist one. ... Whichever definition of probability we adopt (and we will see both in action shortly), probability always follows certain rules. It is a number between 0 and 1 that expresses how likely something is to happen.
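
To make the two interpretations concrete, here is a small Python sketch (purely illustrative; the 7-heads/3-tails data and the uniform prior are invented): the first half estimates a frequentist probability by simulation, while the second performs a simple Bayesian update of a subjective belief using a Beta prior.

```python
import random

# Frequentist view: the observed frequency of heads approaches 0.5
# as the number of tosses grows.
for n in (100, 10_000, 1_000_000):
    heads = sum(random.random() < 0.5 for _ in range(n))
    print(f"{n:>9} tosses -> observed P(heads) = {heads / n:.4f}")

# Bayesian view: start from a subjective belief (a uniform Beta(1, 1)
# prior over the coin's bias) and update it as evidence arrives.
alpha, beta = 1, 1                 # prior pseudo-counts: heads, tails
alpha += 7                         # observed heads
beta += 3                          # observed tails
print(f"Posterior mean bias: {alpha / (alpha + beta):.3f}")  # 0.667
```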


The Future of Work is Not Corporate - It’s DAOs and Crypto Networks

As companies grow, they are no longer able to maintain a sustainable relationship with these orbital network participants. The relationship between the company and the participants turns zero-sum, and in order to maximize profits, the company begins to extract value from these participants. ... The model of a company having strict boundaries between internal and external may have made sense in the Industrial Age, but in the Information Age, this model leads to misaligned incentives and unsustainable extraction. In our world of complex information and orbital stakeholders, companies are no longer suited to help us coordinate our activity. Crypto networks create better alignment between participants, and DAOs will be the coordination layer for this new world. ... DAOs will eventually replace the traditional model. A DAO is an internet-native organization with core functions that are automated by smart contracts, and with people who do the things that automation cannot. In practice, not all DAOs are decentralized or autonomous, so it is best to think of DAOs as internet-based organizations that are collectively owned and controlled by their members.


The future is not the Internet of Things… it is the Connected Intelligent Edge

It is not surprising that Qualcomm is talking about it. At the company’s recent Investor Day presentation, CEO Cristiano Amon shared how Qualcomm is uniquely positioned to drive the Connected Intelligent Edge: “We are working to enable a world where everyone and everything is intelligently connected. Our mobile heritage and DNA puts us in an incredible position to provide high-performance, low-power computing, on-device intelligence, all wireless technologies, and leadership across not only AI processing and connectivity but camera, graphics, and sensors. These technologies will scale to support every single device at the edge, from earbuds all the way to connected intelligent vehicles.” For Qualcomm, Amon sees this as an opportunity to engage a $700 billion addressable market in the next decade. Amon is not alone. ... “Qualcomm is a leader at the Intelligent Edge, driving advances in efficient computing, wireless connectivity and on-device AI. And your vision for a future of technology where everyone and everything is intelligently connected is aligned with our own,” Nadella said.


Measure Outcomes, Not Outputs: Software Development in Today’s Remote Work World

Lower productivity does not always mean that the developer lacks skills and is therefore inefficient. Comparing how much code was written to how much was moved into production provides some key insights. The first insight is whether or not the developer was working on features that are important to the business. Suppose the development team wrote a lot of code, but only a small amount made it to production. In such a scenario, it could mean they weren’t working on the right features because someone misunderstood the business priorities or spent a lot of time on prototyping. Secondly, it is possible that the product owner did not fully define the requirement and kept on changing it, resulting in code churn. Code churn measures the amount of code that was re-written for a feature to be done right. Code churn can happen because of a) inexperienced developers writing bad code, b) the developer’s poor understanding of the product requirements, c) the product owner not defining the feature well, leading to scope changes, or d) the product owner not prioritizing features correctly.
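
As a rough illustration of how churn might be measured in practice, the hypothetical helper below sums lines added and deleted per file from git history over a time window; real code-analytics tools are more sophisticated and track re-edits of the same lines across commits.

```python
import subprocess
from collections import defaultdict

def churn_by_file(repo_path: str, since: str = "30 days ago") -> dict:
    """Sum lines added + deleted per file over a window. A file that
    keeps accumulating both is being rewritten, not just extended."""
    out = subprocess.run(
        ["git", "log", f"--since={since}", "--numstat", "--pretty=format:"],
        cwd=repo_path, capture_output=True, text=True, check=True,
    ).stdout
    churn: dict = defaultdict(int)
    for line in out.splitlines():
        parts = line.split("\t")
        if len(parts) == 3 and parts[0].isdigit() and parts[1].isdigit():
            added, deleted, path = int(parts[0]), int(parts[1]), parts[2]
            churn[path] += added + deleted
    return dict(sorted(churn.items(), key=lambda kv: -kv[1]))

# Example: print the ten most-churned files of the current repository.
for path, lines in list(churn_by_file(".").items())[:10]:
    print(f"{lines:>6}  {path}")
```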


Lights Out: Cyberattacks Shut Down Building Automation Systems

The firm, located in Germany, discovered that three-quarters of the BAS devices in the office building system network had been mysteriously purged of their "smarts" and locked down with the system's own digital security key, which was now under the attackers' control. The firm had to revert to manually flipping the central circuit breakers on and off in order to power the lights in the building. The BAS devices, which control and operate lighting and other functions in the office building, were basically bricked by the attackers. "Everything was removed ... completely wiped, with no additional functionality" for the BAS operations in the building, explains Thomas Brandstetter, co-founder and general manager of Limes Security, whose industrial control system security firm was contacted in October by the engineering firm in the wake of the attack. Brandstetter's team, led by security experts Peter Panholzer and Felix Eberstaller, ultimately retrieved the hijacked BCU (bus coupling unit) key from memory in one of the victim's bricked devices, but it took some creative hacking.


What Log4Shell teaches us about open source security

Nearly every organization now uses some amount of open source, thanks to benefits such as lower cost compared with proprietary software and flexibility in a world increasingly dominated by cloud computing. Open source isn’t going away anytime soon — just the opposite — and hackers know this. As for what Log4Shell says about open-source security, I think it raises more questions than it answers. I generally agree that open-source software has security advantages because of the many watchful eyes behind it — all those contributors worldwide who are committed to a program’s quality and security. But a few questions are fair to ask: Who is minding the gates when it comes to securing foundational programs like Log4j? The Apache Foundation says it has more than 8,000 committers collaborating on 350 projects and initiatives, but how many are engaged to keep an eye on an older, perhaps “boring” one such as Log4j? Should large, deep-pocketed companies besides Google, which always seems to be heavily involved in such matters, be doing more to support the cause with people and resources?


AI Comes Alive in Industrial Automation

AI and ML tools are being used to predict future energy consumption patterns in manufacturing. This mitigates soaring energy costs and also helps offset climate change. AI also helps to sort out chaotic systems such as renewables. “Training these AI models is burning tons of energy. That’s not false. It does take energy,” said Nicholson. “But what people are missing is that AI models are designed to help companies with enormous physical systems operate more efficiently.” While AI takes up a lot of processing energy, the resulting efficiency savings can far outweigh the expense in energy consumption. “AI can help us make more with less. We can cut down on waste with optimization. We can get growth without consuming more,” said Nicholson. “We can train an optimization model in 20 minutes to save a company tens of millions of dollars of energy consumption per year. The advantages can be huge. That’s already happening.” AI can help plant managers figure out what equipment is best for what task at what time. These are issues that are not easily solved without computer analysis.


Backdoor Discovered in US Federal Agency Network

Avast's suspicion of network interception and exfiltration is based on its analysis of two files the researchers obtained. The company did not provide ISMG with the origin of the files. One of the files, through which the threat actor initiates the backdoor, is termed a "downloader" by Avast. It masquerades as a legitimate Windows file named oci[.]dll and abuses WinDivert, a legitimate packet-capturing utility that can be used to implement user-made packet filters, packet sniffers, firewalls, NAT, VPNs, tunneling applications, etc., without the need to write kernel-mode code. This allows the attacker to listen to all internet communication via the victim's network, they say. "We found this first file disguised as oci.dll ('C:\Windows\System32\oci.dll') - or Oracle Call Interface. It contains a compressed library [called NTlib]. This oci.dll exports only one function, 'DllRegisterService'. This function checks the MD5 of the hostname and stops if it doesn't match the one it stores."
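
The hostname check Avast describes is a form of environmental keying: the payload proceeds only on its intended target machine. A benign Python re-creation of the idea (the stored digest here is a placeholder, not the actual value from the malware):

```python
import hashlib
import socket

STORED_DIGEST = "0" * 32  # placeholder for the digest baked into the file

def hostname_matches(stored_digest: str) -> bool:
    current = hashlib.md5(socket.gethostname().encode()).hexdigest()
    return current == stored_digest

# Like the reported sample, proceed only on the one intended machine.
if not hostname_matches(STORED_DIGEST):
    raise SystemExit("hostname digest mismatch - stopping, as the sample does")
```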



Quote for the day:

"Without courage, it doesn't matter how good the leader's intentions are." -- Orrin Woodward

Daily Tech Digest - December 19, 2021

Data Science Collides with Traditional Math in the Golden State

San Francisco’s approach is the model for a new math framework proposed by the California Department of Education for K-12 education statewide. Like the San Francisco model, the state framework seeks to alter the traditional pathway that has guided college-bound students for generations, including by encouraging middle schools to drop Algebra (the decision to implement the recommendations is made by individual school districts). The new framework has been met with some controversy. Yesterday, a group of university professors wrote an open letter on K-12 mathematics, which specifically cites the new California Mathematics Framework. “We fully agree that mathematics education ‘should not be a gatekeeper but a launchpad,’” the professors write. “However, we are deeply concerned about the unintended consequences of recent well-intentioned approaches to reform mathematics, particularly the California Mathematics Framework.” Frameworks like the CMF aim to “reduce achievement gaps by limiting the availability of advanced mathematical courses to middle schoolers and beginning high schoolers,” the professors continued.


Promoting trust in data through multistakeholder data governance

A lack of transparency and openness of the proceedings, or barriers to participation, such as prohibitive membership fees, will impede participation and reduce trust in the process. These challenges are particularly felt by participants from low- and middle-income countries (LICs and LMICs), whose financial resources and technical capacity are usually not on par with those of higher-income countries. These challenges affect both the participatory nature of the process itself and the inclusiveness and quality of the outcome. Even where a level playing field exists, the effectiveness of the process can be limited if decision makers do not incorporate input from other stakeholders. Notwithstanding the challenges, multistakeholder data governance is an essential component of the “trust framework” that strengthens the social contract for data. In practice, this will require supporting the development of diverse forums—formal or informal, digital or analog—to foster engagement on key data governance policies, rules, and standards, and the allocation of funds and technical assistance by governments and nongovernmental actors to support the effective participation of LMICs and underrepresented groups.


A Plan for Developing a Working Data Strategy Scorecard

Strategy is an evolving process, with regular adjustments expected as progress is measured against desired goals over longer timeframes. “There’s always an element of uncertainty about the future,” Levy said, “so strategy is more about a set of options or strategic choices, rather than a fixed plan.” It’s common for companies to re-evaluate and adjust accordingly as business goals evolve and systems or tools change. Before building a strategy, people often assume that they must have vision statements or mission statements, a SWOT analysis, or goals and objectives. These are good to have, he said, but in most instances, they are only available after the strategy analysis is completed. “When people establish their Data Strategies, it’s typically to address limitations they have and the goals that they want. Your strategy, once established, should be able to answer these questions.” But again, Levy said, it’s after the strategy is developed, not prior. Although it can be difficult to understand the purpose of a Data Strategy, he said, it’s critically important to clearly identify goals and know how to communicate them to the intended audience.


“Less popular” JavaScript Design Patterns.

As software engineers, we strive to write maintainable, reusable, and elegant code that might live forever in large applications. The code we create must solve real problems. We are certainly not trying to create redundant, unnecessary, or “just for fun” code. At the same time, we frequently face problems that already have well-known solutions that have been defined and discussed by the global community, or even by our own teams, millions of times. Those solutions to recurring problems are called “design patterns”. There are a number of design patterns in software design; some are used more often, others less frequently. Examples of popular JavaScript design patterns include the factory, singleton, strategy, decorator, and observer patterns. In this article, we’re not going to cover all of the design patterns in JavaScript. Instead, let’s consider some of the less well-known but potentially useful JS patterns such as command, builder, and special case, as well as real examples from our production experience.
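
As a taste of one of them, below is a minimal command pattern sketch. The patterns themselves are language-agnostic, so Python stands in here for the article's JavaScript: each command encapsulates a request as an object with execute/undo methods, and an invoker keeps a history so operations can be reverted.

```python
from dataclasses import dataclass, field

@dataclass
class AddItem:
    # A command object: carries its receiver (the cart) and its arguments.
    cart: list
    item: str

    def execute(self):
        self.cart.append(self.item)

    def undo(self):
        self.cart.remove(self.item)

@dataclass
class Invoker:
    # Runs commands and records them so they can be undone later.
    history: list = field(default_factory=list)

    def run(self, command):
        command.execute()
        self.history.append(command)

    def undo_last(self):
        if self.history:
            self.history.pop().undo()

cart: list = []
invoker = Invoker()
invoker.run(AddItem(cart, "book"))
invoker.run(AddItem(cart, "pen"))
invoker.undo_last()
print(cart)  # ['book']
```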


Software Engineering | Coupling and Cohesion

The purpose of the design phase in the Software Development Life Cycle is to produce a solution to the problem given in the SRS (Software Requirement Specification) document. The output of the design phase is the Software Design Document (SDD). Basically, design is a two-part iterative process. The first part is conceptual design, which tells the customer what the system will do. The second is technical design, which allows the system builders to understand the actual hardware and software needed to solve the customer’s problem. ... If the dependency between the modules is based on the fact that they communicate by passing only data, then the modules are said to be data coupled. In data coupling, the components are independent of each other and communicate through data. Module communications don’t contain tramp data. An example is a customer billing system. In stamp coupling, a complete data structure is passed from one module to another, so it involves tramp data. This may be necessary due to efficiency factors; it is a choice made by the insightful designer, not the lazy programmer.
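
The difference is easiest to see side by side. In this small Python sketch (the billing example is invented), the stamp-coupled function receives a whole structure while needing only one field, so the rest travels along as tramp data; the data-coupled function receives just the field it needs.

```python
from dataclasses import dataclass

@dataclass
class Customer:
    name: str
    address: str
    balance: float
    loyalty_points: int

# Stamp coupling: the whole structure is passed although only the
# balance is needed; the other fields travel along as tramp data.
def invoice_total_stamp(customer: Customer) -> str:
    return f"Amount due: {customer.balance:.2f}"

# Data coupling: only the elementary item the module needs is passed,
# keeping it independent of the Customer structure.
def invoice_total_data(balance: float) -> str:
    return f"Amount due: {balance:.2f}"

c = Customer("Ada", "1 Example St", 42.50, 120)
assert invoice_total_stamp(c) == invoice_total_data(c.balance)
```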


5 Takeaways from SmartBear’s State of Software Quality Report

As API adoption and growth continues, standardization (52%) continues to rank as the top challenge organizations hope to solve soon as they look to scale. Without standardization, APIs become bespoke and developer productivity declines. Costs and time-to-market increase to accommodate changes, the general quality of the consumer experience wanes, and the result is a lower value proposition and decreased reach. Additionally, the consumer persona in the API landscape is rightfully getting more attention. Consumer expectations have never been higher. API consumers demand standardized offerings from providers and will look elsewhere if expectations around developer experience aren’t met, which is especially true in financial services. Security (40%) has thankfully crept up the rankings to number two this year. APIs increasingly connect our most sensitive data, so ensuring your APIs are secure before, during, and after production is imperative. Thoughtful standardization and governance guardrails are required for teams to consistently deliver good-quality, secure APIs.


From DeFi year to decade: Is mass adoption here? Experts Answer, Part 1

More scaling solutions will become essential to the mass adoption of DeFi products and services. We are seeing that most DeFi applications go live on multiple chains. While that makes them cheaper to use, it adds more complexity for those who are trying to learn and understand how they work. Thus, to start the second phase of DeFi mass adoption, we need solutions that simplify the onboarding and use of DApps that are spread across different chains and scaling solutions. The endgame is that all the cross-chain actions will happen in the background, handled by infra services such as Biconomy or by the DApps themselves, so the user doesn’t need to deal with them. ... Going into 2022 and equipped with the right layer-one networks, we’re aiming for mass adoption. To achieve that, we need to eradicate the entry barriers for buying and selling crypto through regulated fiat bridges (such as banks), overhaul the user experience, reduce fees, and provide the right guide rails so everyone can easily and safely participate in the decentralized economy. DeFi is legitimizing crypto and decentralized economies. Traditional financial institutions are already starting to participate. In 2022, we will only see an uptick in usage and adoption.


Serious Security: OpenSSL fixes “error conflation” bugs – how mixing up mistakes can lead to trouble

The good news is that the OpenSSL 1.1.1m release notes don’t list any CVE-numbered bugs, suggesting that although this update is both desirable and important, you probably don’t need to consider it critical just yet. But those of you who have already moved forwards to OpenSSL 3 – and, like your tax return, it’s ultimately inevitable, and somehow a lot easier if you start sooner – should note that OpenSSL 3.0.1 patches a security risk dubbed CVE-2021-4044. ... In theory, a precisely written application ought not to be dangerously vulnerable to this bug, which is caused by what we referred to in the headline as error conflation – really just a fancy way of saying, “We gave you the wrong result.” Simply put, some internal errors in OpenSSL – for example, a genuine but unlikely error such as running out of memory, or a flaw elsewhere in OpenSSL that provokes an error where there wasn’t one – don’t get reported correctly. Instead of percolating back to your application precisely, these errors get “remapped” as they are passed back up the call chain in OpenSSL, where they ultimately show up as a completely different sort of error.


Digital Asset Management – what is it, and why does my organisation need it?

DAM technology is more than a repository, of course. Picture it as a framework that holds a company’s assets, on top of which sits a powerful AI engine capable of learning the connections between disparate data sets and presenting them to users in ways that make the data more useful and functional. Advanced DAM platforms can scale up to storing more than ten billion objects at the same time, all of which become tangible assets connected by the built-in AI. This can produce a huge rise in efficiency around the use of assets and objects. Take, for example, a busy modern media marketing agency. In the digital world, such agencies face a massive expansion of content at the same time as release windows are shrinking, coupled with increasingly complex content creation and delivery ecosystems. A DAM platform can manage those huge volumes of assets, each with its complex metadata, at speeds and scale that would simply break a legacy system. Another compelling example of DAM in action is a large U.S.-based film and TV company, which uses it for licensing management.


Impact of Data Quality on Big Data Management

A starting point for measuring Data Quality can be the qualities of big data—volume, velocity, variety, veracity—supplemented with a fifth criterion of value, which together make up the baseline performance benchmarks. Interestingly, these baseline benchmarks actually contribute to the complexity of big data: variety (structured, unstructured, or semi-structured) increases the possibility of poor data; data channels such as streaming devices with high-volume and high-velocity data increase the chances of corrupt data—and thus no single quality metric can work on such voluminous and multi-type data. The easy availability of data today is both a boon and a barrier to Enterprise Data Management. On one hand, big data promises advanced analytics with actionable outcomes; on the other hand, data integrity and security are seriously threatened. The Data Quality program is an important step in implementing a practical DG framework, as this single factor controls the outcomes of business analytics and decision-making. ... Another primary challenge that big data brings to Data Quality Management is ensuring data accuracy, without which insights would be inaccurate.



Quote for the day:

"There is no "one" way to be a perfect leader, but there are a million ways to be a good one." -- Mark W. Boyer

Daily Tech Digest - December 18, 2021

10 Key AI & Data Analytics Trends for 2022 and Beyond

Whilst most research is understandably focused on pushing the boundaries of complexity, the reality is that training and running complex models can have a big impact on the environment. It’s predicted that data centres will represent 15% of global CO2 emissions by 2040, and a 2019 research paper, “Energy considerations for Deep Learning,” found that training a natural language translation model emitted CO2 levels equivalent to four family cars over their lifetime. Clearly, the more training, the more CO2 is released. With a greater understanding of environmental impact, organisations are exploring ways to reduce their carbon footprint. Whilst we can now use AI to make data centres more efficient, the world should expect to see more interest in simple models that perform as well as complex ones for solving specific problems. Realistically, why should we use a 10-layer convolutional neural network when a simple Bayesian model performs equally well while using significantly less data, training, and compute power? “Model efficiency” will become a byword for environmental AI, as creators focus on building simple, efficient, and usable models that don't cost the earth.


“Digital Twin” with Python: A hands-on example

IBM defines a digital twin as follows: “A digital twin is a virtual model designed to accurately reflect a physical object”. They go on to describe how the main enabling factors for creating a digital twin are the sensors that gather data and the processing system that inserts the data in some particular format/model into the digital copy of the object. Further, IBM says, “Once informed with such data, the virtual model can be used to run simulations, study performance issues and generate possible improvements”. ... So, how do we use our favorite language Python to create a digital twin? Why do we even think it will work? The answer is deceptively simple. Just look at the figure above and then at the one below to see the equivalency between a Digital Twin model and a classic Python object. We can emulate the sensors and data processors with suitable methods/functions, store the gathered data in a database or internal variables, and encapsulate everything into a Python class.
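
Following that suggestion, here is a minimal self-contained sketch of what such a class could look like; the pump, its sensor statistics, and the load model are all invented for illustration.

```python
import random
import statistics

class PumpTwin:
    """Toy digital twin of a pump: an emulated sensor method, stored
    readings, and a simulation method, mirroring the sensor/processing/
    model roles described above."""

    def __init__(self, name: str):
        self.name = name
        self.temperature_log: list[float] = []

    def read_sensor(self) -> float:
        # Stand-in for a real sensor feed (normal around 70 degrees C).
        reading = random.gauss(70.0, 2.0)
        self.temperature_log.append(reading)
        return reading

    def run_simulation(self, extra_load: float) -> float:
        # Predict temperature under extra load from gathered data,
        # using a made-up linear load model.
        baseline = statistics.mean(self.temperature_log)
        return baseline + 5.0 * extra_load

twin = PumpTwin("pump-01")
for _ in range(100):
    twin.read_sensor()
print(f"Predicted temperature at 20% extra load: {twin.run_simulation(0.2):.1f}")
```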


Patterns for Authorization in Microservices

When you have a monolith, you generally only need to talk to one database to decide whether a user is allowed to do something. An authorization policy in a monolith doesn't need to concern itself too much with where to find the data (such as user roles) — you can assume all of it is available, and if any more data needs to be loaded, it can be easily pulled in from the monolith's database. But the problem gets harder with distributed architectures. Perhaps you're splitting your monolith into microservices, or you're developing a new compute-heavy service that needs to check user permissions before it runs jobs. Now, the data that determines who can do what might not be so easy to come by. You need new APIs so that your services can talk to each other about permissions: "Who's an admin on this organization? Who can edit this document? Which documents can they edit?" To make a decision in service A, we need data from service B. How does a developer of service A ask for that data? How does a developer of service B make that data available?
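
One common answer is for service B to expose a small authorization-data API that service A queries on demand, keeping the decision logic in service A. A minimal Python sketch of that shape (the service URL, endpoint path, response format, and role names are all assumptions, not any particular product's API):

```python
import json
import urllib.request

# Assumed internal endpoint of service B (the "org service").
ORG_SERVICE = "http://org-service.internal/api"

def get_roles(user_id: str, org_id: str) -> list:
    # Service A fetches authorization data from service B on demand.
    url = f"{ORG_SERVICE}/orgs/{org_id}/users/{user_id}/roles"
    with urllib.request.urlopen(url) as resp:
        return json.load(resp)["roles"]

def can_edit_document(user_id: str, org_id: str) -> bool:
    # The decision logic stays in service A; only the data comes from B.
    roles = set(get_roles(user_id, org_id))
    return bool(roles & {"admin", "editor"})
```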


How the use of AI and advanced technology is revolutionizing healthcare

In the payments realm, Mastercard® Healthcare Solutions optimizes the workflow for payers and providers by automating repetitive and error-prone operations, such as billing and claims processing. According to CIO magazine, many hospitals are now using AI to automate mundane tasks, reduce workloads, eliminate errors and speed up the revenue cycle. The author notes AI’s effectiveness for reducing incorrect payments for erroneous billings, and for preventing the labor-intensive process of pulling files, resubmitting to payers and eventual payment negotiations. ... The successful use of AI for FWA (fraud, waste and abuse) prevention is increasing in popularity. A recent study by PYMNTS revealed that approximately 12 percent of the 100 sector executives surveyed use AI in healthcare payments, three times the number using AI in 2019. Nearly three-quarters of the 100 execs plan to implement AI by 2023. ... These are all important factors when building an AI model and show the need to demonstrate return on investment (ROI) through a proof of concept.


IBM Brings AI to Monitor Petabytes of Network Traffic

“As we surround applications with our capabilities, we will understand the traffic flow and the performance and what’s normal,” Coward says. “The longer you run the AI within the network, the more you know about what typically happens on a Tuesday afternoon in Seattle.” A key aspect of SevOne is the ability to take raw network performance data from sources such as SNMP traps, logs in Syslog format, and even packets captured from network taps, combine it in a database, and then generate actionable insights from that blended data. “The uniqueness of SevOne is really that we put it into a time-series database. So we understand, for all those different events, how they are captured [and] we can correlate them,” Coward explains. “That sounds like an extraordinarily simple thing to do. When you’re trying to do that at scale across a wide network where you literally have petabytes of data being created, it creates its own challenge.” The insights generated from SevOne can take the form of dashboards that anyone can view to see if there’s a network problem, thereby eliminating the need to call IT.


Ethics in Tech: 5 Actions to Lead, Sustain Growth

The rapid deployment of AI into societal decision-making—in areas such as health care recommendations, hiring decisions, and autonomous driving—has catalyzed ongoing ethics discussions regarding trustworthy AI. These considerations are in early stages. Future issues could arise as tech goes beyond AI. Focus is intensifying on the importance of deploying AI-powered systems that benefit society without sparking unintended consequences with respect to bias, fairness, or transparency. Technology is increasingly a focal point in discussions about efforts to deceive using disinformation, misinformation, deepfakes, and other misuses of data to attack or manipulate people. Some tech companies are asking governments to pass regulations clearly outlining responsibilities and standards, and many organizations are cooperating with law enforcement and intelligence agencies to promote vigilance and action. ... Many technology organizations are facing demands from stakeholders to do more than required by law to adopt sustainable measures such as promoting more efficient energy use and supply chains, reducing manufacturing waste, and decreasing water use in semiconductor fabrication.


The 5 Characteristics of a Successful Data Scientist

Everything is connected in some way, well beyond the obvious, which leads to layer upon layer of real world complexity. Complex systems interact with other complex systems to produce additional complex systems of their own, and so goes the universe. This game of complexity goes beyond just recognizing the big picture: where does this big picture fit into the bigger picture, and so on? But this isn't just philosophical. This real world infinite web of complexity is recognized by data scientists. They are interested in knowing as much about relevant interactions, latent or otherwise, as they work through their problems. They look for situation-dependent known knowns, known unknowns, and unknown unknowns, understanding that any given change could have unintended consequences elsewhere. It is the data scientist's job to know as much about their relevant systems as possible, and leverage their curiosity and predictive analytical mindset to account for as much of these systems' operations and interactions as feasible, in order to keep them running smoothly even when being tweaked. 


PYMNTS DeFi Series: Unpacking DeFi and DAO

As with any public blockchain, the open-source code is viewable by the public. Since there is no human being in control, users can be certain the code will execute according to the rules it contains. As the industry saying goes, “code is law.” DAOs are controlled by a type of cryptocurrency called governance tokens, and these give token holders a vote on the project. The investment is based on the idea that as the platform attracts more users and funds are deposited into its lending pools, the total value locked (TVL) increases and its tokens become more valuable. Aave has nearly $14 billion in TVL, but the AAVE token is not loaned out. The Aave protocol’s voters have allowed lenders to lock 30 different cryptocurrencies, each of which has interest rates for lenders and borrowers set by the smart contract rules. Different protocols have different voting rules, but almost all come down to this: Token holders can propose a rule change. If it gets enough support, a vote is scheduled; if enough voters support it, the proposal passes, the code is updated, and the protocol’s rules change.
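
The propose-vote-execute flow can be modeled in a few lines. The Python toy below uses token-weighted votes and an illustrative quorum threshold; it is not modeled on any real protocol's parameters.

```python
from dataclasses import dataclass, field

@dataclass
class Proposal:
    description: str
    change: dict
    votes_for: float = 0.0
    votes_against: float = 0.0

@dataclass
class ToyDAO:
    balances: dict                       # holder -> governance tokens
    rules: dict = field(default_factory=dict)

    def vote(self, proposal, holder, support: bool):
        # Vote weight equals the holder's governance-token balance.
        weight = self.balances.get(holder, 0.0)
        if support:
            proposal.votes_for += weight
        else:
            proposal.votes_against += weight

    def execute(self, proposal, quorum: float = 0.2):
        # "Code is law": a passing proposal updates the rules directly.
        supply = sum(self.balances.values())
        turnout = (proposal.votes_for + proposal.votes_against) / supply
        if turnout >= quorum and proposal.votes_for > proposal.votes_against:
            self.rules.update(proposal.change)

dao = ToyDAO(balances={"alice": 60.0, "bob": 30.0, "carol": 10.0})
p = Proposal("Raise the borrow rate", {"borrow_rate": 0.05})
dao.vote(p, "alice", True)
dao.vote(p, "bob", False)
dao.execute(p)
print(dao.rules)  # {'borrow_rate': 0.05}
```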


How Does Blockchain Help with Digital Identity?

It is well understood that blockchain-based digital identity management is robust and encrypted, ensuring security and ease of portability; hence the case for incorporating it effectively to improve the socio-economic well-being of users, which is closely tied to digital identity. With time and advancing technologies, digital identity has become an essential entity that enables users to exercise various rights and privileges. Although blockchain has various benefits for managing digital identities, it cannot be considered a panacea. Blockchain technology is continuously developing, and though it offers multiple benefits, various challenges remain when aiming to completely replace traditional identity management methods with blockchain-based ones. Some of the known challenges include the constantly developing technology and the lack of standardization of data exchange. Considering the benefits that come with transparency and the trust earned through blockchain frameworks, numerous organizations are coming together to ensure interoperability across their borders.


Why 2022 Will Be About Databases, Data Mesh, and Open Source Communities

Data lakes will continue their dominance as essential for enabling analytics and data visibility; 2022 will see rapid expansion of a thriving ecosystem around data lakes, driven by enterprises seeking greater data integration. As organizations work out how to introduce data from third-party systems and real-time transactional production workloads into their data lakes, technologies such as Apache Kafka and Pulsar will take on those workloads and grow in adoption. Beyond introducing data to enable BI reporting and analytics, technologies such as Debezium and Kafka Connect will also enable data lake connectivity, powering services that require active data awareness. Expect that approaches leveraging an enterprise message bus will become increasingly common as well. Organizations in a position to benefit from the rise of integration solutions should certainly move on these opportunities in 2022. Related to this trend (and to Trend #1 as well): the emerging concept of a data mesh will really come into its own in 2022. 



Quote for the day:

"The greatest leader is not necessarily the one who does the greatest things. He is the one that gets the people to do the greatest things." -- Ronald Reagan

Daily Tech Digest - December 17, 2021

2022: Supply-Chain Chronic Pain & SaaS Security Meltdowns

With the rise of SaaS adoption, we have witnessed the parallel development of a “business application mesh,” which enables organizations to build custom business logic across multiple, disparate SaaS applications. This mesh also creates transitive trust relationships that allow data to move among these SaaS applications without a central authority that has visibility into, or governs, the movement of this data. In the past, our IT architecture enabled the enterprise to have a view of how users were interacting with multiple different applications, while remaining at the center of the interactions. But with the business application mesh in place, SaaS applications are connected to each other directly without the enterprise being at the center. GitHub is now automated to interact with Slack on behalf of my organization, for instance. Jira is connected directly with Salesforce. Hubspot sends data to a myriad of other SaaS applications. The growing network of integrations enables automated business workflows and data exchange.


5 Leadership Trends to Embrace Now to Prepare for 2022

Leaders of the future are paying attention. As we head into 2022, we must create cultures where employee well-being comes first. Change like this starts at the top, and leaders must set an example. Every person on a company’s executive team must be committed to workplace well-being, modeling a holistic lifestyle where the top priorities are physical, emotional, mental and spiritual health. The days of work, work and more work are over. People are craving more balance and wellness in life, and leaders who ignore or resist addressing it will be left behind. Second, leaders must build a supportive environment that focuses on the whole person, not just the working portion. A supportive environment offers resources for depression and other mental health issues and incentives for exercise and healthy eating behaviors. Companies must offer EAP (employee assistance program) services that address mental health and financial, spiritual and social well-being. Creating a supportive environment requires an investment in training: training on how to create psychological safety, where employees feel safe talking about their well-being.


Optimize your system design using Architecture Framework Principles

When it comes to system design, simplicity is key. If your architecture is too complex to understand, your developers and operations teams can face complications during implementation or ongoing management. Wherever possible, we highly recommend using fully managed services to minimize the risk of managing and maintaining baseline systems, as well as the time and effort required of your teams. If you’re already running your workloads in production, testing managed service offerings can help simplify operational complexities. If you’re starting new, start simple, establish an MVP, and resist the urge to over-engineer. You can identify corner use cases, iterate, and improve your systems incrementally over time. Decoupling is a technique used to separate your applications and service components, such as a monolithic application stack, into smaller components that can operate independently. A decoupled architecture, therefore, can run its function(s) independently, irrespective of its various dependencies.


Engineering Manager: Do Not Be a Hero

Our day-to-day work is to support the people and the teams. The teams need to improve their skills, deliver higher-quality software, and many other things. In an ideal world, our organization would have all the resources or capacity required to meet these needs. But the real world is sometimes hard; we have to work towards these goals while being careful not to promise unrealistic ones. Creating an expectation and then failing to meet it has a very negative impact on the team. If this happens many times, the team will probably lose trust in us, and perhaps in the organization. If you are going to work on improving some of their needs, it’s important to share that with the team and also to identify the priorities. Depending on the topic, timing is also important. I believe in transparency, but transparency doesn’t mean sharing every single thing that goes through your head. For example, if you are working to increase the team’s salaries, it would be good to verify with the organization that there is enough budget before you share it with the team.


Bridging the AppSec and DevOps Disconnect

Culturally, some ingrained attitudes and behaviors challenge the success of any DevSecOps efforts. Security teams have seen DevOps processes accelerate the speed at which software is delivered, but without security considerations, while DevOps teams experienced security slowing down processes and giving inconsistent results and feedback on security issues. Each party has their own manager to please; their own set of metrics that they’re measured against and a priority list as long as their arms already. Both teams follow different processes and, crucially, use different tools. DevOps can’t get around the security tool complexity and lack of integration with their existing toolset and security teams have no control over the CI pipeline to best implement security assurance. One of the best ways to overcome this friction is through better technology, process and culture that enables collaboration between teams. First, DevOps teams do care about security, but it might be lower on their priority list. Security teams must understand that DevOps teams care about code, quality and efficiency.


ARC4 Encryption Library

The ARC4 Cryptography Provider Class Library is a DLL file for .NET projects that includes an implementation of a well-known symmetric encryption algorithm that is not present in the System.Security.Cryptography namespace of the mscorlib library. The cryptographic algorithm, known as ARC4 (Alleged RC4), is a stream cipher that is widely used in various information security systems on computer networks (for example, the SSL and TLS protocols and the WEP and WPA wireless security algorithms). The original RC4 stream cipher was created by Ronald Rivest of RSA Security. For seven years the cipher was a trade secret, and the exact description of the algorithm was provided only after the signing of a non-disclosure agreement, but in September 1994 its description was anonymously posted to the Cypherpunks mailing list. ... Although this cipher is no longer recommended, ARC4 remains popular due to the simplicity of its software implementation and its high speed of operation. Another important advantage is the variable key length and the fact that the ciphertext is the same size as the original data.
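
That simplicity is easy to demonstrate: the whole cipher is the key-scheduling step plus the pseudo-random generation step, as in the Python sketch below. It is for illustration only; RC4 is broken and, for instance, prohibited in TLS by RFC 7465.

```python
def arc4(key: bytes, data: bytes) -> bytes:
    """Minimal ARC4 for illustration only - do not use for real security."""
    # Key-scheduling algorithm (KSA): permute S under the key.
    S = list(range(256))
    j = 0
    for i in range(256):
        j = (j + S[i] + key[i % len(key)]) % 256
        S[i], S[j] = S[j], S[i]
    # Pseudo-random generation algorithm (PRGA), XORed with the data.
    out = bytearray()
    i = j = 0
    for byte in data:
        i = (i + 1) % 256
        j = (j + S[i]) % 256
        S[i], S[j] = S[j], S[i]
        out.append(byte ^ S[(S[i] + S[j]) % 256])
    return bytes(out)

ciphertext = arc4(b"secret key", b"attack at dawn")
assert arc4(b"secret key", ciphertext) == b"attack at dawn"  # symmetric
```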


A Scalable Approach for Partially Local Federated Learning

Previous approaches for partially local federated learning used stateful algorithms, which require user devices to store a state across rounds of federated training. Specifically, these approaches required devices to store local parameters across rounds. However, these algorithms tend to degrade in large-scale federated learning settings. In these cases, the majority of users do not participate in training, and users who do participate likely only do so once, resulting in a state that is rarely available and can get stale across rounds. Also, all users who do not participate are left without trained local parameters, preventing practical applications. Federated Reconstruction is stateless and avoids the need for user devices to store local parameters by reconstructing them whenever needed. When a user participates in training, before updating any globally aggregated model parameters, they randomly initialize and train their local parameters using gradient descent on local data with global parameters frozen. They can then calculate updates to global parameters with local parameters frozen.
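
A rough sketch of that two-phase step, using a toy matrix-factorization model (global item embeddings, a local user embedding) in Python with NumPy; the shapes, learning rates, and simulated users are illustrative assumptions, not the paper's actual setup.

```python
import numpy as np

rng = np.random.default_rng(0)

def participate(V, ratings, lr=0.05, recon_steps=20, global_steps=5):
    # Phase 1: reconstruct the local user embedding u, global V frozen.
    u = rng.normal(scale=0.1, size=V.shape[1])
    for _ in range(recon_steps):
        err = V @ u - ratings
        u -= lr * (V.T @ err) / len(ratings)
    # Phase 2: compute the update to global V, local u frozen.
    V_new = V.copy()
    for _ in range(global_steps):
        err = V_new @ u - ratings
        V_new -= lr * np.outer(err, u) / len(ratings)
    return V_new - V  # only this delta leaves the device; u is discarded

V_true = rng.normal(size=(8, 4))          # hidden "true" item embeddings
V = rng.normal(scale=0.1, size=(8, 4))    # server's global model
for _ in range(200):                      # simulate many one-shot users
    true_u = rng.normal(size=4)
    ratings = V_true @ true_u + rng.normal(scale=0.01, size=8)
    V += participate(V, ratings)  # a real server would average a round
```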


Why Log4j Mitigation Is Fraught With Challenges

One major challenge organizations face in defending against attacks targeting Log4j is figuring out their full exposure to the threat, according to security experts. The vulnerability can be present not just on an organization's Internet-facing assets, but on internal and back-end systems, network switches, SIEM and other logging systems, internally developed and third-party apps, in SaaS and cloud services, and environments they might not even know about. The interdependencies between different applications and components mean even if a component does not directly have the vulnerability, it can still be affected by it. The way Java packing works can often make it hard to identify affected applications, Noname Security says. As an example, a Java archive (JAR) file might contain all the dependencies — including the Log4j library — of a particular component. But that JAR file might contain another JAR file that, in turn, could contain yet another JAR file — essentially burying the vulnerability several layers deep, the security vendor said.
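
The nesting problem is mechanical enough to illustrate: the Python sketch below recursively opens JARs within JARs, looking for the JndiLookup class associated with Log4Shell. It is a simplified illustration, not a replacement for the dedicated scanners vendors have released (real scanners also check versions and shaded or renamed packages).

```python
import io
import sys
import zipfile

TARGET = "org/apache/logging/log4j/core/lookup/JndiLookup.class"

def scan_jar(data: bytes, name: str, depth: int = 0) -> None:
    # Open the archive in memory and recurse into any nested archives.
    try:
        with zipfile.ZipFile(io.BytesIO(data)) as zf:
            for entry in zf.namelist():
                if entry.endswith(TARGET):
                    print(f"{'  ' * depth}HIT in {name}: {entry}")
                elif entry.endswith((".jar", ".war", ".ear")):
                    scan_jar(zf.read(entry), f"{name}!{entry}", depth + 1)
    except zipfile.BadZipFile:
        pass  # not a readable archive; skip

if __name__ == "__main__":
    for path in sys.argv[1:]:
        with open(path, "rb") as f:
            scan_jar(f.read(), path)
```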


Why employee burnout must be expected, accepted and supported this winter

All businesses must be mindful of the problem of employee burnout. According to one recent poll, 57% of employers claim that the issue is affecting turnover, retention and productivity. Another survey found that seven out of 10 workers would be willing to move jobs to try to reduce the likelihood of burnout. Across a number of leading economies, the summer of 2021 saw worker resignations reach record levels. Fail to address the burnout question in the months ahead, and we could be in for a further wave of resignations early in the new year. Organisations should consider that, according to Deloitte, for every £1 spent by employers on mental health interventions, they get back £5 in reduced absence, presenteeism and staff turnover. Our advice to progressive organisations is to look around for local examples of best-practice wellbeing support, as well as burnout paid time off, and apply them across every market in which they employ people. Legal obligations must always be met, wherever you operate.


Digital IDs don’t have to impinge on civil liberties and privacy

When implemented correctly, decentralized digital IDs can make it harder to infringe upon civil liberties and privacy. That said, it’s essential that these IDs are not federated or corporatized but are, instead, self-sovereign identities, fully controlled by the end-user — made entirely possible by blockchain’s trustless verification. Decentralized digital IDs are supported by a wide range of emerging technologies and techniques, leading to the creation of a truly Self-Sovereign ID, or SSI — where users hold full control over their personal data. This includes zero-knowledge proofs, a technique that allows one party to prove a claim to another without revealing the underlying information, which ensures that personal information never has to be revealed to or retained by third-party verifiers. Having self-sovereign identities linked to purchases and payment rails will facilitate trustless trade that can also seamlessly stay in line with regulatory expectations. Better yet, most of this upgrade would happen at the software level.



Quote for the day:

"No matter how much you change, you still have to pay the price for the things you've done." -- Doug MacRay

Daily Tech Digest - December 16, 2021

The New Face of Data Management

Despite the data explosion, IT organizations haven’t necessarily changed storage strategies. They keep buying expensive storage devices because unassailable performance is required for critical or “hot” data. The reality is that all data is not diamonds. Some of it is emeralds and some of it is glass. By treating all data the same way, companies are creating needless cost and complexity. ... Yet as hot data continues to grow, the backup process becomes sluggish. So, you purchase expensive, top-of-line backup solutions to make this faster, but you still need ever-more storage for all these copies of your data. The ratio of unique data (created and captured) to replicated data (copied and consumed) is roughly 1:9. By 2024, IDC expects this ratio to be 1:10. Most organizations are backing up and replicating data that is in fact rarely accessed and better suited to low-cost archives such as in the cloud. Beyond backup and storage costs, organizations must also secure all of this data. A one-size-fits-all strategy means that all data is secured to the level of the most sensitive, critically important data.


Technology and the future of modern warfare

This digital revolution points to a new kind of hyper-modern warfare. Artificial intelligence is a good example of this. If AI can read more data in a minute than a human can read in a year, then its value to militaries is immeasurable. In a recent interview with The Daily Telegraph, the current Chief of General Staff, General Sir Mark Carleton-Smith, has acknowledged that “we are already seeing the implications of artificial intelligence, quantum computing and robotics, and how they might be applied on the battlefield”. Machine learning, for instance, has already been used to harvest key grains of intelligence from the chaff of trivial information that usually inundates analysts. All this is not to say, however, that there will be a complete obsolescence of traditional equipment and means. The British Army remains an industrial age organisation with an industrial skill set, but one which is confronted by innovation challenges. Conventional threats can still materialise at any time. The recent stationing of Russian troops along the Ukrainian border and within the Crimea – in addition to the manoeuvring of its naval forces in the Sea of Azov – is a case in point.


Attacking Natural Language Processing Systems With Adversarial Examples

The attack can potentially be used to cripple machine learning translation systems by forcing them to either produce nonsense, or actually change the nature of the translation; to bottleneck training of NLP models; to misclassify toxic content; to poison search engine results by causing faulty indexing; to cause search engines to fail to identify malicious or negative content that is perfectly readable to a person; and even to cause Denial-of-Service (DoS) attacks on NLP frameworks. Though the authors have disclosed the paper’s proposed vulnerabilities to various unnamed parties whose products feature in the research, they consider that the NLP industry has been laggard in protecting itself against adversarial attacks. The paper states: ‘These attacks exploit language coding features, such as invisible characters and homoglyphs. Although they have been seen occasionally in the past in spam and phishing scams, the designers of the many NLP systems that are now being deployed at scale appear to have ignored them completely.’
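
Two of the encoding tricks the paper describes, homoglyphs and invisible characters, are easy to demonstrate; in the Python snippet below, each manipulated string renders like the original to a human reader but differs to any string comparison or tokenizer.

```python
import unicodedata

visible = "paypal"

# 1) Homoglyphs: swap Latin "a" for the look-alike Cyrillic letter.
homoglyph = visible.replace("a", "\u0430")   # Cyrillic small letter a
print(homoglyph)                  # renders as "paypal"
print(visible == homoglyph)       # False

# 2) Invisible characters: insert a zero-width space.
invisible = "pay\u200bpal"
print(len(visible), len(invisible))  # 6 7
print(visible == invisible)          # False

# A minimal defense: strip format (Cf) code points before processing.
cleaned = "".join(ch for ch in invisible
                  if unicodedata.category(ch) != "Cf")
print(cleaned == visible)            # True
```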


When done right, network segmentation brings rewards

Segmentation is an IT approach that separates critical areas of the network to control east-west traffic, prevent lateral movement, and ultimately reduce the attack surface. Traditionally, this is done via an architectural approach – relying on hardware, firewalls and manual work. This can often prove cumbersome and labor intensive, which is a contributing factor in 82% of respondents saying that network segmentation is a “huge task.” ... Modern segmentation uses a software-based approach that is simpler to use, faster to implement and is able to secure more critical assets. The research shows that organizations that leverage the latest approach to segmentation will realize essential security benefits, like identifying more ransomware attacks and reducing time to mitigate attacks. “The findings of the report demonstrate just how valuable a strong segmentation strategy can be for organizations looking to reduce their attack surface and stop damaging attacks like ransomware,” said Pavel Gurvich, SVP, Akamai Enterprise Security.


Neural networks can hide malware, and scientists are worried

As malware scanners can’t detect malicious payloads embedded in deep learning models, the only countermeasure against EvilModel is to destroy the malware. The payload only maintains its integrity if its bytes remain intact. Therefore, if the recipient of an EvilModel retrains the neural network without freezing the infected layer, its parameter values will change and the malware data will be destroyed. Even a single epoch of training is probably enough to destroy any malware embedded in the DL model. However, most developers use pretrained models as they are, unless they want to fine-tune them for another application. And some forms of fine-tuning freeze most existing layers in the network, which might include the infected layers. This means that alongside adversarial attacks, data poisoning, membership inference, and other known security issues, malware-infected neural networks are a real threat to the future of deep learning.


5 Key Skills Needed To Become a Great Data Scientist

Data scientists should develop the habit of critical thinking. It helps in better understanding the problem. Unless the problem is understood at the most granular level, the solution can’t be good. Critical thinking helps in analyzing the different options and choosing the right one. In solving data science problems, decisions are not always simply good or bad; a lot of options lie in the grey area between the two. There are many decisions involved in a data science project: choosing the right set of attributes, the right methodology, the right algorithms, the right metrics to measure model performance, and so on. ... Coding skills are as important to a data scientist as eyes are to an artist. Anything a data scientist does requires coding skills: reading data from multiple sources, performing exploratory analysis on the data, building models, and evaluating them. ... Math is another important skill for data scientists. It is OK not to know some of the math concepts while you are learning data science, but it is not possible to excel as a data scientist without understanding them.


DeepMind Now Wants To Study The Behaviour Of Electrons, Launches An AI Tool

Density functional theory (DFT) describes matter at the quantum level, but popular approximations suffer from systematic errors that arise from the violation of mathematical properties of the exact functional. DeepMind has overcome this fundamental limitation by training a neural network on molecular data and on fictitious systems with fractional charge and spin. The result was the DM21 (DeepMind 21) tool. It correctly describes typical examples of artificial charge delocalization and strong correlation, and performs better than traditional functionals on thorough benchmarks for main-group atoms and molecules. The company claims that DM21 accurately models complex systems such as hydrogen chains, charged DNA base pairs, and diradical transition states. DM21 is a neural network that achieves state-of-the-art accuracy on large parts of chemistry, and to accelerate scientific progress the code has been open-sourced.


Are venture capitalists misunderstood?

The arrival of growth investors put traditional VCs under further pressure. Whereas Rock and his successors regularly got involved long before there was a product to market, others eventually realized there were opportunities further down the line, and provided vast amounts of capital to established firms that they believed had the potential to become many times larger. These investors, who included Yuri Milner and Masayoshi Son, were irresistible to ambitious tech companies. Unlike VCs, which demanded equity in exchange for funding, Milner and Son did not even want to sit on the board. Mallaby argues that huge capital injections by growth investors (and the VCs that chose to compete with them) resulted in greater control for entrepreneurs, but also weaker corporate governance and ultimately over-reach and ill-discipline. “Precisely at the point when tech companies achieved escape velocity and founders were apt to feel too sure of themselves,” Mallaby writes, “the usual forms of private or public governance would thus be suspended.”


IT security: 4 issues to watch in 2022

If infosec had a greatest hits album, basic security hygiene would be track one. Year in, year out, the root cause of many security incidents can be traced back to the fundamentals. A wide range of threats, from ransomware to cloud account hijacking to data leakage, owe much of their efficacy to surprisingly simple missteps, from a misconfigured setting (or even a default setting left unchanged) to an over-privileged user to unpatched software. ... This raises the question: What are the basics? Things like password hygiene and system patching apply across the board, but you also need to identify and agree with colleagues on “the basics” required in your specific organization. That gives you a collective standard to work toward and measure against. Moreover, the word “basic” doesn’t do some of the fundamentals justice. “In my world, the basics are patch management, secure configuration, threat modeling, DAST and SAST scanning, internal and external vulnerability scanning, penetration testing, defense against phishing attacks, third-party vulnerability assessments, backup and disaster recovery, and bespoke security training,” Elston says.


Growing an Experiment-Driven Quality Culture in Software Development

When people share their thoughts and wishes, there might be different needs underlying them. For example, tech leadership can express a desire for global quantitative metrics, when what they actually need is to figure out what impact they want to have in the first place and what information they need to achieve it. Remember the teams falling back to everyday business? The systemic context plays a huge role and deserves consideration in your experiments. For example, if you’re setting out to improve the quality culture of a team, think about what kind of quality-contributing behavior gets rewarded, and how. If a person does a great job, yet these contributions and the resulting impact are not valued, they probably won’t get promoted for it. The main challenges usually come back to people’s interactions as well as the systems in which we are interacting. This includes building trustful relationships and shaping a safe, welcoming space where people can bring their whole authentic selves and have a chance to thrive.



Quote for the day:
 
"The strong do what they have to do and the weak accept what they have to accept." -- Thucydides

Daily Tech Digest - December 15, 2021

Unstructured Data Will Be Key to Analytics in 2022

Many organizations today have a hybrid cloud environment in which the bulk of data is stored and backed up in private data centers across multiple vendor systems. As unstructured (file) data has grown exponentially, the cloud is being used as a secondary or tertiary storage tier. It can be difficult to see across the silos to manage costs, ensure performance, and manage risk. As a result, IT leaders realize that extracting value from data across clouds and on-premises environments is a formidable challenge. Multicloud strategies work best when organizations use different clouds for different use cases and data sets. However, this brings about another issue: moving data from one cloud to another later is very expensive. A newer concept is to pull compute toward data that lives in one place. That central place could be a colocation center with direct links to cloud providers. Multicloud will evolve with different strategies: sometimes compute comes to your data, sometimes the data resides in multiple clouds.


Developing Event-Driven Microservices

Microservices increasingly use event-driven architectures for communication, and relatedly, many data-driven systems are employing an event sourcing pattern of one form or another. In event sourcing, data changes are sent as events describing the change, which interested services receive; the data is thus sourced from the events, and event sourcing in general moves the source of truth for data to the event broker. This fits nicely with the decoupling paradigm of microservices. It is very important to notice that event sourcing actually involves two operations: making the data change and communicating that change as an event. There is therefore a transactional consideration, and any inconsistency or failure causing a lack of atomicity between these two operations must be accounted for. This is an area where TEQ has an extremely significant and unique advantage: as the messaging/eventing system, it is actually part of the database system itself, and it can therefore conduct both operations in the same local transaction and provide this atomicity guarantee. (A generic sketch of the same idea follows below.)
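Outside of TEQ, a common general-purpose way to get this atomicity is the transactional outbox pattern: the state change and the event describing it are committed in the same local transaction, and a separate relay later publishes the outbox rows to the broker. A minimal sketch with SQLite (table and column names are illustrative, not from the article):

```python
# Minimal transactional-outbox sketch: the data change and its event are
# committed together, so neither can exist without the other. This is a
# generic stand-in for the in-database atomicity TEQ provides natively.
import json
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE accounts (id INTEGER PRIMARY KEY, balance INTEGER NOT NULL);
    CREATE TABLE outbox   (id INTEGER PRIMARY KEY AUTOINCREMENT,
                           topic TEXT NOT NULL, payload TEXT NOT NULL);
    INSERT INTO accounts (id, balance) VALUES (1, 100);
""")

def debit(conn, account_id, amount):
    # Both statements run in one local transaction: commit makes the data
    # change and its event visible together; rollback discards both.
    with conn:
        conn.execute(
            "UPDATE accounts SET balance = balance - ? WHERE id = ?",
            (amount, account_id),
        )
        conn.execute(
            "INSERT INTO outbox (topic, payload) VALUES (?, ?)",
            ("account.debited", json.dumps({"id": account_id, "amount": amount})),
        )

debit(conn, 1, 25)
# A separate relay process would read the outbox and publish to the broker.
print(conn.execute("SELECT topic, payload FROM outbox").fetchall())
```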
 

Quantum computing use cases are getting real—what you need to know

Most known use cases fit into four archetypes: quantum simulation, quantum linear algebra for AI and machine learning, quantum optimization and search, and quantum factorization. We describe these fully in the report, as well as outline questions leaders should consider as they evaluate potential use cases. ... Quantum computing has the potential to revolutionize the research and development of molecular structures in the biopharmaceuticals industry as well as provide value in production and further down the value chain. In R&D, for example, new drugs cost an average of $2 billion and take more than ten years to reach the market after discovery. Quantum computing could make R&D dramatically faster, more targeted, and more precise by making target identification, drug design, and toxicity testing less dependent on trial and error and therefore more efficient. A faster R&D timeline could get products to the right patients more quickly and more efficiently—in short, it would improve more patients’ quality of life. Production, logistics, and supply chain could also benefit from quantum computing.


How Extended Security Posture Management Optimizes Your Security Stack

XSPM helps the security team deal with the constant content configuration churn and leverages telemetry to identify gaps in security, generating up-to-date emerging-threat feeds and providing additional test cases that emulate the TTPs attackers would use, saving DevSecOps teams the time needed to develop those test cases. When running XSPM validation modules, knowing that the tests are timely, current, and relevant enables reflecting on the efficacy of security controls and understanding where to make investments to ensure that configuration, hygiene, and posture are maintained through the constant changes in the environment. By providing visibility and maximizing relevancy, XSPM helps verify that each dollar spent benefits risk reduction and tool efficacy. Through baselining and trending, and by automatically generating reports containing detailed recommendations covering security hardening and tool stack optimization, it dramatically facilitates conversations with the board.


Edge computing keeps moving forward, but no standards yet

As powerful as this concept of seemingly unlimited computing resources may be, it does raise a significant, practical question: how can developers build applications for the edge when they don’t necessarily know what resources will be available at the various locations in which their code will run? Cloud computing enthusiasts may point out that a related version of this dilemma faced cloud developers in the past, and technologies for software abstraction were developed that essentially relieved software engineers of this burden. However, most cloud computing environments had a much smaller range of potential computing resources. Edge computing environments, on the other hand, won’t only offer more choices, but also different options across related sites (such as all the towers in a cellular network). The end result will likely be one of the most heterogeneous targets for software applications that has ever existed. Companies like Intel are working to solve some of the heterogeneity issues with software frameworks.


The Mad Scramble To Lead The Talent Marketplace Market

While this often starts as a career planning or job matching system, companies realize early on that it’s also a mentoring tool, a way to connect to development programs, a way to promote job-sharing and gig work, and a way for hiring managers to find great staff. In reality, this type of solution becomes “the system for internal mobility and development,” so companies like Allstate, NetApp, and Schneider see it as an entire system for employee growth. Other companies, like Unilever, see it as a way to promote flexible work. These companies use the Talent Marketplace to encourage agile, gig-style work and help people find projects or developmental assignments. Internal gig work and cross-functional projects are a massive trend (part of the movement toward Agile), and within a given function (IT, HR, Customer Service, Finance) it’s incredibly powerful. And since the marketplace democratizes opportunities, companies like Seagate see it as a diversity platform as well.


Inside the blockchain developer’s mind: Proof-of-stake blockchain consensus

The real innovation in Bitcoin (BTC) was the creation of an elegant system combining cryptography with economics, leveraging electronic coins (now called “cryptocurrencies”) and incentives to solve problems that algorithms alone cannot solve. People were forced to perform meaningless work to mine blocks, but the security stems not from the performance of the work itself; it stems from the knowledge that this work could not have been achieved without the sacrifice of capital. Were this not the case, there would be no economic component to the system. The work is a verifiable proxy for sacrificed capital. Because the network has no means of “understanding” money that is external to it, a system needed to be implemented that converted the external incentive into something the network can understand — hashes. The more hashes an account creates, the more capital it must have sacrificed, and the more incentivized it is to produce blocks on the correct fork. Since these people have already spent their money to acquire hardware and run it to produce blocks, the incentivizing punishment is easy: in effect, they have already been punished up front.
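A toy sketch of that "verifiable proxy" idea: finding a valid nonce requires burning compute, while verifying it takes a single hash. This is a generic proof-of-work illustration, not Bitcoin's actual block format:

```python
# Toy proof-of-work: the miner burns compute searching for a nonce whose
# hash meets a difficulty target; anyone can verify the result with one
# hash. The hash count is thus a verifiable proxy for capital spent.
import hashlib

def mine(block_data: str, difficulty: int) -> int:
    """Find a nonce such that SHA-256(block_data + nonce) starts with
    `difficulty` zero hex digits."""
    target = "0" * difficulty
    nonce = 0
    while True:
        digest = hashlib.sha256(f"{block_data}{nonce}".encode()).hexdigest()
        if digest.startswith(target):
            return nonce
        nonce += 1

nonce = mine("block payload", difficulty=4)
print("nonce:", nonce)
# Verification is a single hash: cheap for the network, expensive to forge.
print(hashlib.sha256(f"block payload{nonce}".encode()).hexdigest())
```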


Why Intuitive Troubleshooting Has Stopped Working for You

With complicated and complex, I’m using specific terminology from the Cynefin model. Cynefin (pronounced kuh-NEV-in) is a well-regarded system management framework that categorizes different types of systems in terms of how understandable they are. It also lays out how best to operate within those different categories — what works in one context won’t work as well in another — and it turns out that these operating models are extremely relevant to engineers operating today’s production software. Broadly, Cynefin describes four categories of system: obvious, complicated, complex, and chaotic. From the naming, you can probably guess that this categorization ranges from systems that are more predictable and understandable, to those that are less — where predictability is defined by how clear the relationship is between cause and effect. Obvious systems are the most predictable; the relationship between cause and effect is clear to anyone looking at the system. Complicated systems have a cause-and-effect relationship that is well understood, but only to those with system expertise. 
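For reference, Cynefin pairs each of those categories with a recommended decision model. The sketch below encodes that standard pairing; the mapping comes from the Cynefin framework itself rather than from this article:

```python
# Compact encoding of Cynefin's standard category -> decision-model pairing.
# The mapping is from the Cynefin framework itself, not from the article.
CYNEFIN = {
    "obvious":     "sense -> categorize -> respond (apply best practice)",
    "complicated": "sense -> analyze -> respond (bring in expertise)",
    "complex":     "probe -> sense -> respond (run safe-to-fail experiments)",
    "chaotic":     "act -> sense -> respond (stabilize first)",
}

def recommend(category: str) -> str:
    """Return the recommended operating model for a Cynefin category."""
    return CYNEFIN[category.lower()]

print(recommend("complex"))
```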


2022 will see a rise in application security orchestration and correlation (ASOC)

For organisations that build software, 2022 will be the year of invisible AppSec. When AppSec tools are run automatically, and when results are integrated with existing processes and issue trackers, developers can be fixing security weaknesses as part of their normal workflows. There is no reason for developers to go to separate systems to “do security,” and no reason they should be scrolling through thousand-page PDF reports from the security team, trying to figure out what needs to be done. When security testing is automated and integrated into a secure development process, it becomes a seamless part of application development. At the same time, organisations are coming to recognise that AppSec is a critical part of risk management, and that a properly implemented AppSec programme results in business benefits. Good AppSec equals fewer software vulnerabilities, which equals less risk of catastrophe or embarrassing publicity, but also results in fewer support cases, fewer emergency updates, higher productivity, and happier customers. But how can organisations turn this knowledge into power?
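A schematic sketch of that "invisible AppSec" flow: scanner findings are filtered and turned into tracker tickets automatically, so developers see actionable items in their normal workflow rather than a PDF report. The findings format and the file_ticket() stub are hypothetical placeholders, not any particular tool's API:

```python
# Schematic sketch of automated AppSec triage: filter scanner findings by
# severity and file them as tracker tickets. Finding fields and the
# file_ticket() stub are hypothetical placeholders.
from dataclasses import dataclass

@dataclass
class Finding:
    rule: str
    severity: str  # "low" | "medium" | "high" | "critical"
    file: str
    line: int
    message: str

def file_ticket(title: str, body: str) -> None:
    # Placeholder: a real integration would call the issue tracker's API.
    print(f"TICKET: {title}\n{body}\n")

def triage(findings, min_severity="high"):
    ranking = {"low": 0, "medium": 1, "high": 2, "critical": 3}
    threshold = ranking[min_severity]
    for f in findings:
        if ranking[f.severity] >= threshold:
            file_ticket(
                title=f"[{f.severity.upper()}] {f.rule} in {f.file}",
                body=f"{f.file}:{f.line} -- {f.message}",
            )

# Example scanner output (normally parsed from a SAST/DAST tool's report).
triage([
    Finding("sql-injection", "critical", "app/db.py", 42, "Unsanitized query input"),
    Finding("debug-enabled", "low", "settings.py", 7, "DEBUG=True in production"),
])
```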


Why Sustainability Is the Next Priority for Enterprise Software

To meet market and consumer demands, every enterprise will need to evolve its sustainability programs to be just as accurate and rigorous as financial accounting. Just as the Sarbanes–Oxley Act of 2002 mandates practices in financial record keeping and reporting for corporations in the US, we can expect laws and consumer expectations around sustainability impacts to follow suit. In the same way that SaaS platforms, cloud computing, and digital transformation have changed how enterprises sell, hire, and invest, we’re on the cusp of similar changes within sustainability. For example, as recently as the mid-2000s, interviewing for a new corporate job meant printing out resumes, distributing paper benefits pamphlets, and signing forms that had been Xeroxed a half-dozen times. Today, numerous human resources software companies offer streamlined digital solutions for tracking candidates, onboarding new colleagues, and managing benefits. When large organizations are faced with a high volume of data in any area of their business, digitization is the inevitable solution.



Quote for the day:

"The level of morale is a good barometer of how each of your people is experiencing your leadership." -- Danny Cox