Daily Tech Digest - March 19, 2022

How Radical API Design Changed the Way We Access Databases

One of the early design decisions we made at MongoDB was to focus on interaction with the database through a pure object-based API. There would be no query language. Instead, every request to the database would be described as a set of objects, intended to be constructed by a computer as much as by a human (in many cases, more often by a computer). This let programmers treat a complex query the same way they would write a piece of imperative code. Want to retrieve all the animals in your database that have exactly two legs? Create an object, set a member, “legs,” to two, and query the database for matching objects. What you get back is an array of objects. The model extends to even the most complex operations. Building database queries as code was a leap from a query-language mindset to a programmer’s mindset, one that significantly sped up development time and improved query performance. This API approach to database operations helped kickstart MongoDB’s rapid adoption and growth in our early years.
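The query-by-object idea can be sketched in a few lines of plain Python. This is not MongoDB's actual driver (with PyMongo the equivalent call would be along the lines of `db.animals.find({"legs": 2})`); the `find` helper and sample documents below are illustrative stand-ins for the pattern the passage describes.

```python
# A plain-Python sketch of MongoDB's query-by-object idea: the "query"
# is just a dict of required field values, not a string in a query language.

def find(collection, query):
    """Return all documents whose fields match every key/value in `query`."""
    return [doc for doc in collection
            if all(doc.get(field) == value for field, value in query.items())]

animals = [
    {"name": "ostrich", "legs": 2},
    {"name": "dog", "legs": 4},
    {"name": "human", "legs": 2},
]

# Build the query as an object, exactly as a program would.
two_legged = find(animals, {"legs": 2})
print([a["name"] for a in two_legged])  # → ['ostrich', 'human']
```

Because the query is an ordinary data structure, a program can assemble it field by field, which is exactly what makes this model friendlier to code generation than a textual query language.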


Software Techniques for Lemmings

The performance of a system with thousands of threads will be far from satisfying. Threads take time to create and schedule, and their stacks consume a lot of memory unless their sizes are engineered, which won't be the case in a system that spawns them mindlessly. We have a little job to do? Let's fork a thread, call join, and let it do the work. This was popular enough before the advent of <thread> in C++11, but <thread> did nothing to temper it. I don't see <thread> as being useful for anything other than toy systems, though it could be used as a base class to which many other capabilities would then be added. Even apart from these Thread Per Whatever designs, some systems overuse threads because it's their only encapsulation mechanism. They're not very object-oriented and lack anything that resembles an application framework. So each developer creates their own little world by writing a new thread to perform a new function. The main reason for writing a new thread should be to avoid complicating the thread loop of an existing thread. 
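The "fork a thread, call join" pattern the passage criticizes, and the bounded alternative, can be contrasted in a short sketch. This uses Python's standard threading primitives rather than C++ `<thread>`, but the trade-off is the same: per-job thread creation versus a small reusable pool.

```python
# Sketch of the pattern the passage criticizes ("fork a thread, call join")
# versus reusing a small fixed pool of worker threads.
import threading
from concurrent.futures import ThreadPoolExecutor

results = []
lock = threading.Lock()

def small_job(n):
    with lock:
        results.append(n * n)

# Thread-per-job: cheap to write, but creation/scheduling overhead and
# per-thread stack memory grow with every job spawned.
threads = [threading.Thread(target=small_job, args=(i,)) for i in range(100)]
for t in threads:
    t.start()
for t in threads:
    t.join()

# A bounded pool amortizes thread creation across many jobs instead.
with ThreadPoolExecutor(max_workers=4) as pool:
    squares = list(pool.map(lambda n: n * n, range(100)))

print(len(results), squares[:5])  # → 100 [0, 1, 4, 9, 16]
```

Both versions compute the same answers; the pool version simply caps the number of stacks and scheduler entities at four regardless of how many jobs arrive.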


Software development is changing again. These are the skills companies are looking for

The new normal means developers will work in a variety of ways with a broad church of partners. As well as internal developers, Verastar uses outsourced capability and works closely with some key digital transformation partners, including Salesforce. "We have a very hybrid team. People need to learn to work together and across different teams. We bring everything together with Agile and sprints. Working in a virtual world means it's very rare you're all sat together in the same office now," says Clarkson, "And that's certainly the case with us. Although we've got a centre in Sale, Manchester, we've got developers that work remotely, our partner works remotely, and they'll be based either nearshore or offshore as well, so you can end up with quite a wide team." Dal Virdi, IT director at legal firm Shakespeare Martineau, is another tech chief who recognises that a successful modern IT team relies on a hybrid blend of internal developers and external specialists. Virdi recognised about 18 months ago that his firm's ongoing digital transformation strategy, and the way in which the business was introducing a broad range of technologies, meant they didn't need to have internal specialists focused on one language or platform.


Concept drift vs data drift in machine learning

Concept and data drift are responses to statistical changes in the data. Hence, approaches that monitor the model’s statistical properties, predictions, and their correlation with other factors help identify the drift. But several steps need to be taken after identification to ensure the model stays accurate. Two popular approaches are online machine learning and periodic retraining. Online learning updates the model in real time as data arrives sequentially: the learner takes in batches of samples over time and optimises each batch in one go, so the model retains the new patterns appearing in the data stream. Periodic retraining of the model is also critical. Since an ML model degrades every three months on average, retraining at regular intervals can stop drift in its tracks.
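A minimal sketch can show why per-sample updates let a model track concept drift. The one-weight linear model and the flipped relationship below are toy assumptions, not a production recipe; real systems would use a library's incremental-fit API (e.g. scikit-learn's `partial_fit`) instead.

```python
# A minimal sketch of online learning under concept drift: a one-weight
# linear model y ≈ w*x updated one sample at a time with SGD, so it can
# track a relationship that changes mid-stream.

def sgd_step(w, x, y, lr=0.1):
    pred = w * x
    return w - lr * (pred - y) * x  # gradient step on squared error

w = 0.0
# Phase 1: the true relationship is y = 2x.
for x in [1.0, 2.0, 1.5, 0.5] * 10:
    w = sgd_step(w, x, 2 * x)
w_before_drift = w

# Phase 2: concept drift — the relationship flips to y = -2x.
# The same per-sample updates pull the weight toward the new concept.
for x in [1.0, 2.0, 1.5, 0.5] * 10:
    w = sgd_step(w, x, -2 * x)

print(round(w_before_drift, 2), round(w, 2))  # → 2.0 -2.0
```

A model retrained only on the old batch would keep predicting with the stale weight; the online learner converges to the new relationship within a few passes over the stream.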


The rise of zero-touch IT

First, zero-touch IT is a way to free your people from maintenance tasks, and up-level your ops team to be more strategic. You’ve noticed the Great Resignation — IT talent never grew on trees, and now there’s an epic drought. Your team’s time and abilities shouldn’t be wasted on what can be automated. Second, IT serves demanding customers. Corporate users have grown less tolerant of waiting for IT to ride to their rescue, and they have sharper expectations. After all, if they can find and load a CRM app on their phone in one minute, why can’t your technology experts provide them with a new company CRM in, say, 10 minutes? Users onboard, request privileges and carry out operations in different time zones. Automation doesn’t sleep, making it a good fit with asynchronous workforces. Third, zero-touch IT, when properly implemented, reduces mistakes caused by fatigue and overload. One distracted IT staffer can easily grant unauthorized data privileges to an outside contractor, with dire consequences. There are options for zero-touch IT; independently constructed workflows can be automated, but this can produce a spaghetti of disparate procedures that behave differently and create confusion.


Seeing the Unseen: A New Lens on Visibility at Work

In some sense, “seeing what you want to see” means seeing what you already believe. That’s fine if you seek consensus, but it’s not a good formula for innovative thinking. Pressure is necessary to effect real change. This often involves challenging the status quo and stepping outside a perspective that has gone unrecognized as fixed. Surrounding yourself with people with similar experiences, beliefs, and perceptions about the world can foreclose on the possibility of thinking differently. On teams, shared assumptions can result in people coming up with the same or similar solutions to a set of challenges. While these solutions may help people like you, they may fail to address the needs of others who are not like you. Take, for example, the failure to optimize early smartphone cameras for darker skin tones, or how facial recognition technologies identify White faces with a higher degree of accuracy compared with those of people of color. Technological biases of this kind ensure that some people are seen, while others remain unseen or perhaps seen in a very unfavorable light.


Moore’s Law: Scientists Just Made a Graphene Transistor Gate the Width of an Atom

To be clear, the work is a proof of concept: The researchers haven’t meaningfully scaled the approach. Fabricating a handful of transistors isn’t the same as manufacturing billions on a chip and flawlessly making billions of those chips for use in laptops and smartphones. Ren also points out that 2D materials, like molybdenum disulfide, are still pricey and manufacturing high-quality stuff at scale is a challenge. New technologies like gate-all-around silicon transistors are more likely to make their way into your laptop or phone in the next few years. Also, it’s worth noting that the upshot of Moore’s Law—that computers will continue to get more powerful and cheaper at an exponential rate—can also be driven by software tweaks or architecture changes, like using the third dimension to stack components on top of one another. Still, the research does explore and better define the outer reaches of miniaturization, perhaps setting a lower bound that may not be broken for years. It also demonstrates a clever way to exploit the most desirable properties of 2D materials in chips.


New explanation emerges for robust superconductivity in three-layer graphene

Graphene is an atomically-thin sheet of carbon atoms arranged in a 2D hexagonal lattice. When two sheets of graphene are placed on top of each other and slightly misaligned, the positions of the atoms form a moiré pattern or “stretched” superlattice that dramatically changes the interactions between their electrons. The degree of misalignment is very important: in 2018, researchers at the Massachusetts Institute of Technology (MIT) discovered that at a “magic” angle of 1.1°, the material switches from being an insulator to a superconductor. The explanation for this behaviour is that, as is the case for conventional superconductors, electrons with opposite spins pair up to form “Cooper pairs” that then move through the material without any resistance below a certain critical transition temperature Tc (in this case, 1.7 K). Three years later, the Harvard experimentalists observed something similar happening in (rhombohedral) trilayer graphene, which they made by stacking three sheets of the material at small twist angles with opposite signs. In their work, the twist angle between the top and middle layer was 1.5° while that between the middle and bottom layer was -1.5°.


Intelligent Diagramming Makes Sense of Cloud Complexities

One major issue IT leaders face is simply knowing what components their cloud environments contain. At any point, a company can have SaaS apps, databases, containers and workloads that sometimes spread across multiple cloud providers, as well as on-premises systems. The first step in mitigating these cloud complexities is to know what you have — in other words, to take inventory. Depending on which cloud providers you use — Amazon Web Services (AWS), Microsoft Azure or Google Cloud Platform (GCP) — you may need a variety of inventory tools. Learn what your provider’s default management console offers. In some cases, you may need to write scripts in order to pull data on every resource type from every corner of your cloud environment (different regions, for instance, might require separate queries). If your environment is complex enough, management consoles and scripts won’t cut it. Automated inventory tools can make it much easier to identify every component of your cloud environment. But there are still opportunities to simplify how that inventory is pulled, viewed and understood.
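The "separate queries per region, merged into one inventory" pattern can be sketched in plain Python. In practice each region's listing would come from a provider SDK (for AWS, something like boto3's per-region `describe_instances`); the stubbed data below stands in for those API calls so the merge step itself is visible.

```python
# Hedged sketch: the per-region results here are stubs standing in for
# real cloud-provider API queries, which typically must be issued once
# per region before being merged into a single inventory.

def list_resources(region):
    # Stub standing in for a per-region API query (illustrative data).
    fake_cloud = {
        "us-east-1": [{"id": "vm-1", "type": "vm"}, {"id": "db-1", "type": "db"}],
        "eu-west-1": [{"id": "vm-2", "type": "vm"}],
    }
    return fake_cloud.get(region, [])

def take_inventory(regions):
    inventory = {}
    for region in regions:
        for res in list_resources(region):
            # Tag each resource with where it was found.
            inventory[res["id"]] = {**res, "region": region}
    return inventory

inv = take_inventory(["us-east-1", "eu-west-1"])
print(sorted(inv))  # → ['db-1', 'vm-1', 'vm-2']
```

Once everything lands in one keyed structure, the automated inventory tools the passage mentions are essentially doing this at scale, with discovery instead of hand-listed regions.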


MLOps for Enterprise AI

There was a time when building machine learning (ML) models and taking them to production was a challenge. There were challenges with sourcing and storing quality data, unstable models, dependency on IT systems, finding the right talent with a mix of Artificial Intelligence Markup Language (AIML) and IT skills, and much more. However, times have changed. Though some of these issues still exist, there has been an increase in the use of ML models amongst enterprises. Organizations have started to see the benefits of ML models, and they continue their investments to bridge the gap and grow the use of AIML. Nevertheless, the growth of ML models in production leads to new challenges like how to manage and maintain the ML assets and monitor the models. Since 2019, there has been a surge in incorporating machine learning models into organizations, and MLOps has started to emerge as a new trending keyword. It’s not just a trend, though; it’s a necessary element in the complete AIML ecosystem.



Quote for the day:

"It's not about how smart you are--it's about capturing minds." -- Richie Norton

Daily Tech Digest - March 18, 2022

Defining the Possible Approaches to Optimum Metadata Management

Metadata has been the focus of a lot of recent work, both in academia and industry. As more and more electronic data is generated, stored, and managed, metadata generation, storage, and management promise to improve the utilization of that data. Data and metadata are intrinsically linked, hence the concept can be found in any possible application area and can take numerous forms depending on its application context. However, it is found that metadata is often employed in scientific computations just for the initial data selection; at the most, metadata about query results are recovered after the query has been successfully executed and correlated. As a result, throughout the query processing procedure, a vast amount of information that may be useful for analyzing query results is not utilized. Thus, the data need "refinements". There are two distinct definitions of "refinements". The first is the addition of qualifiers that clarify or enlarge an element's meaning. While such modifications may be useful or even necessary for a particular metadata application, for the sake of interoperability, the values of such elements can be regarded as subtypes of a broader element.


Data Contracts — ensure robustness in your data mesh architecture

In cases where many applications are coupled to each other, a cascading effect sometimes can be seen. Even a small change to a single application can lead to the adjustment of many applications at the same time. Therefore, many architects and software engineers avoid building coupled architectures. Data contracts are positioned to be the solution to this technical problem. A data contract guarantees interface compatibility and includes the terms of service and service level agreement (SLA). The terms of service describe how the data can be used, for example, only for development, testing, or production. The SLA typically also describes the quality of data delivery and interface. It also might include uptime, error rates, and availability, as well as deprecation, a roadmap, and version numbers. Data contracts are in many cases part of a metadata-driven ingestion framework. They’re stored as metadata records, for example, in a centrally managed metastore, and play an important role for data pipeline execution, validation of data types, schemas, interoperability standards, protocol versions, defaulting rules on missing data, and so on. Therefore, data contracts include a lot of technical metadata.
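What such a metadata record and its enforcement might look like can be sketched briefly. The field names, version string, and defaulting rule below are invented for illustration; real contract stores use richer schema languages (JSON Schema, Avro, etc.), but the validation flow is the same.

```python
# Hedged sketch of a data contract as a metadata record: the contract pins
# down the schema (field names and types), a defaulting rule for missing
# data, terms of service, and a version number; a pipeline validates
# incoming records against it before accepting them.

contract = {
    "version": "1.2.0",
    "terms_of_service": "production",
    "schema": {"customer_id": int, "email": str, "country": str},
    "defaults": {"country": "unknown"},
}

def validate(record, contract):
    out = dict(record)
    for field, ftype in contract["schema"].items():
        if field not in out:
            if field in contract["defaults"]:
                out[field] = contract["defaults"][field]  # defaulting rule
            else:
                raise ValueError(f"missing required field: {field}")
        if not isinstance(out[field], ftype):
            raise TypeError(f"{field} must be {ftype.__name__}")
    return out

rec = validate({"customer_id": 42, "email": "a@b.com"}, contract)
print(rec["country"])  # → unknown
```

Because the contract is itself data, the consuming side can check compatibility before a producer's change cascades through every downstream application.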


What’s behind the cloud talent crisis — and how to fix it

The problem is that there aren’t enough experienced, trained engineers necessary to meet that need. And even folks who have been in the thick of cloud technology from the start are finding themselves rushing to stay abreast of the evolution of cloud technology, ensuring that they’re up on the newest skills and the latest changes. Compounding the issue, it’s an employee’s market, where job seekers are spoiled for choice by an endless number of opportunities. Companies are finding themselves in fierce competition, fishing during a drought in a pool that keeps shrinking. “It’s going to require so many more experienced, trained engineers than we currently have,” said Cloudbusting host Jez Ward during the Cloud Trends 2022 thought leadership podcast series at re:Invent. “We’re taking it exceptionally seriously, and we probably have it as our number one risk that we’re managing. As we talk to some of our partner organizations, they see this in the same way.” Cloudbusting podcast hosts Jez Ward and Dave Chapman were joined by Tara Tapper, chief people officer at Cloudreach, and Holly Norman, Cloudreach’s head of AWS marketing, to talk about what’s behind the tech crisis, and how companies can meet this challenge.


Meta AI’s Sparse All-MLP Model Doubles Training Efficiency Compared to Transformers

Transformer architectures have established the state-of-the-art on natural language processing (NLP) and many computer vision tasks, and recent research has shown that All-MLP (multi-layer perceptron) architectures also have strong potential in these areas. However, although newly proposed MLP models such as gMLP (Liu et al., 2021a) can match transformers in language modelling perplexity, they still lag in downstream performance. In the new paper Efficient Language Modeling with Sparse all-MLP, a research team from Meta AI and the State University of New York at Buffalo extends the gMLP model with sparsely activated conditional computation using mixture-of-experts (MoE) techniques. Their resulting sMLP sparsely-activated all-MLP architecture boosts the performance of all-MLPs in large-scale NLP pretraining, achieving training efficiency improvements of up to 2x compared to transformer-based mixture-of-experts (MoE) architectures, transformers, and gMLP.
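The mixture-of-experts mechanism that sMLP builds on can be illustrated with a toy top-1 router. The two "experts", the gate weights, and the token values below are all made-up numbers for illustration; the point is only the sparsity: the gate scores every expert, but just one runs per token.

```python
# Minimal sketch of sparse mixture-of-experts routing: a gate scores the
# experts for each token and only the top-scoring expert is executed, so
# per-token compute stays constant as more experts are added.

def gate_scores(token, gate_weights):
    # Dot product of the token with each expert's gating vector.
    return [sum(t * w for t, w in zip(token, wrow)) for wrow in gate_weights]

def moe_layer(token, experts, gate_weights):
    scores = gate_scores(token, gate_weights)
    best = max(range(len(experts)), key=lambda i: scores[i])
    return experts[best](token), best  # only one expert is activated

# Two toy "experts" and a hand-set gate (assumed values, for illustration).
experts = [lambda t: [x * 2 for x in t], lambda t: [x + 1 for x in t]]
gate_weights = [[1.0, 0.0], [0.0, 1.0]]

out, chosen = moe_layer([3.0, 0.5], experts, gate_weights)
print(chosen, out)  # → 0 [6.0, 1.0]
```

In the real architecture the gate is learned and the experts are MLP blocks, but the conditional-computation idea is exactly this: capacity grows with the number of experts while each token pays for only one.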


How to build a better CIO-CMO relationship

The CIO should be regularly and actively engaging the CMO for assistance in "telling the story" of new technology investments. For example, they should share how the new HR system not only provided a good ROI and TCO, but made employees' lives easier and better. Technology vendors are well aware of the value of having their technology leaders "tell the story." The deputy CIO of Zoom spends a considerable amount of time evangelizing about the company and its products -- and is highly effective at it. Spotify has a well-regarded series of videos about how its DevOps culture helps it succeed. CIOs at non-technology companies -- or more accurately, at companies that produce products other than hardware, software and cloud services -- would do well to take a page from the technology CIO's playbook. CMOs and their teams can assist CIOs and their teams with developing a campaign to market a new technology implementation. They can ensure the campaign captures the appropriate attention of the desired constituencies, up to and including developing success metrics, so CIOs are able to assess how effective they're being.


Matter smart home standard delayed until fall 2022

The CSA is also allowing more time for the build and verification of a larger than expected number of platforms (OS’s and chipsets), which it hopes will see Matter launch with a healthy slate of compatible Matter devices, apps, and ecosystems. This need arose over the last year based on activity seen on the project’s Github repository. More than 16 platforms, including OS platforms like Linux, Darwin, Android, Tizen, and Zephyr, and chipset platforms from Infineon, Silicon Labs, TI, NXP, Nordic, Espressif Systems and Synaptics will now support Matter. “We had thought there would be four or five platforms, but it’s now more than 16,” says Mindala-Freeman. “The volume at which component and platform providers have gravitated to Matter has been tremendous.” The knock-on effect of these SDK changes is that the CSA needs to give its 50 member companies who are currently developing Matter-capable products another chance to test those devices before they go through the Matter certification process. The CSA also shared details of that initial certification process with The Verge. Following a specification validation event (SVE) this summer 


PyTorch Geometric vs Deep Graph Library

Arguably the most exciting accomplishment of deep learning with graphs so far has been the development of AlphaFold and AlphaFold2 by DeepMind, a project that has made major strides in solving the protein structure prediction problem, a longstanding grand challenge of structural biology. With myriad important applications in drug discovery, social networks, basic biology, and many other areas, a number of open-source libraries have been developed for working with graph neural networks. Many of these are mature enough to use in production or research, so how can you go about choosing which library to use when embarking on a new project? Various factors can contribute to the choice of GNN library for a given project. Not least is compatibility with your and your team’s existing expertise: if you are primarily a PyTorch shop, it would make sense to give special consideration to PyTorch Geometric, although you might also be interested in using the Deep Graph Library with PyTorch as the backend (DGL can also use TensorFlow as a backend).


The No-Code Approach to Deploying Deep Learning Models on Intel® Hardware

Deep learning has two broad phases: training and inference. During training, computers build artificial neural network models by analyzing thousands of inputs—images, sentences, sounds—and guessing at their meaning. A feedback loop tells the machine if the guesses are right or wrong. This process repeats thousands of times, creating a multilayered network of algorithms. Once the network reaches its target accuracy, it can be frozen and exported as a trained model. During deep learning inference, a device compares incoming data with a trained model and infers what the data means. For example, a smart camera compares video frames against a deep learning model for object detection. It then infers that one shape is a cat, another is a dog, a third is a car, and so on. During inference, the device isn’t learning; it’s recognizing and interpreting the data it receives. There are many popular frameworks—like TensorFlow, PyTorch, MXNet, PaddlePaddle—and a multitude of deep learning topologies and trained models. Each framework and model has its own syntax, layers, and algorithms.
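The train-then-freeze-then-infer split can be shown with a deliberately tiny model. The one-dimensional "feature", labels, and threshold learner below are toy assumptions standing in for a real network; what matters is that the feedback loop exists only in training, and inference merely applies the frozen parameters.

```python
# Minimal sketch of the two phases: "training" nudges a threshold until
# guesses match labels, then the frozen model only infers.

def train(samples, labels, epochs=50, lr=0.1):
    threshold = 0.0
    for _ in range(epochs):
        for x, label in zip(samples, labels):
            guess = 1 if x > threshold else 0
            threshold += lr * (guess - label)  # feedback: right or wrong
    return threshold  # frozen, exported "model"

def infer(model_threshold, x):
    # No learning here: just compare incoming data with the trained model.
    return "cat" if x > model_threshold else "dog"

# Toy 1-D "feature": cats score high, dogs score low (assumed data).
model = train([0.2, 0.3, 0.8, 0.9], [0, 0, 1, 1])
print(infer(model, 0.85), infer(model, 0.1))  # → cat dog
```

This is also why inference can run on far weaker hardware than training: the expensive feedback loop has already been paid for, and the device only evaluates the exported parameters.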


Operational resilience is much more than cyber security

To a Chief Information Officer, for example, an IT department can’t be considered operationally resilient without the accurate, actionable data necessary to keep essential business services running. To a Chief Financial Officer, meanwhile, resilience involves maintaining strong financial reporting systems in order to maintain vigilance over spend and savings. This list could run on and on, but while resilience manifests itself differently to different departments, no aspect of an enterprise organisation exists in a vacuum. True resilience involves understanding connections between different aspects of a business – and the dependencies between the various facets of its infrastructure. To understand the connections and dependencies between business services, customer journeys, business applications, and cloud / legacy infrastructure, and so on, large organisations need to invest in tools like configuration management databases (CMDBs). With the visibility and knowledge that a CMDB provides, organisations can strengthen their resilience by understanding and anticipating how disruptions to one part of their infrastructure will impact the rest.
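The kind of impact question a CMDB answers can be sketched as a walk over a dependency graph. The service and component names below are hypothetical example entries, not a real CMDB schema.

```python
# Hedged sketch of the CMDB idea: model "X depends on Y" relationships as
# a graph, then ask which services are impacted when one item fails.
from collections import deque

depends_on = {
    "payments-service": ["app-server-1", "customer-db"],
    "customer-portal": ["app-server-1"],
    "customer-db": ["storage-array-7"],
}

def impacted_by(failed_item):
    # Walk the dependency graph upward from the failed component,
    # collecting everything that transitively relies on it.
    impacted, queue = set(), deque([failed_item])
    while queue:
        item = queue.popleft()
        for service, deps in depends_on.items():
            if item in deps and service not in impacted:
                impacted.add(service)
                queue.append(service)
    return impacted

print(sorted(impacted_by("storage-array-7")))  # → ['customer-db', 'payments-service']
```

The transitive step is the important one: the storage array never appears in the payments service's own dependency list, yet the walk still surfaces the disruption two hops away.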


Deploying AI With an Event-Driven Platform

There are a number of crucial features required in an event-driven AI platform to provide real-time access to models for all users. The platform needs to offer self-service analytics to non-developers and citizen data scientists. These users must be able to access all models and any data required for training, context, or lookup. The platform also needs to support as many different tools, technologies, notebooks, and systems as possible because users need to access everything by as many channels and options as possible. Further, almost all end users will require access to other types of data (e.g., customer addresses, sales data, email addresses) to augment the results of these AI model executions. Therefore, the platform should be able to join our model classification data with live streams from different event-oriented data sources, such as Twitter and weather feeds. This is why my first choice for building an AI platform is to utilize a streaming platform such as Apache Pulsar. Pulsar is an open-source distributed streaming and publish/subscribe messaging platform that allows your machine learning applications to interact with a multitude of application types.
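The enrichment join the passage describes can be sketched without any broker. This is plain Python, not actual Pulsar client code; the customer lookup table and model output are invented examples standing in for a reference store and a stream of classification events.

```python
# Plain-Python sketch of the join described above: enriching model
# classifications with lookup data as events arrive, the way a consumer
# on a streaming platform would before publishing results downstream.

customer_lookup = {"c1": {"email": "ana@example.com", "city": "Lisbon"}}

def enrich(event_stream, lookup):
    for event in event_stream:
        context = lookup.get(event["customer_id"], {})
        yield {**event, **context}  # join model output with reference data

model_outputs = [{"customer_id": "c1", "label": "churn-risk", "score": 0.91}]
enriched = list(enrich(model_outputs, customer_lookup))
print(enriched[0]["label"], enriched[0]["city"])  # → churn-risk Lisbon
```

On a real deployment the generator would be replaced by a topic subscription, but the shape of the computation, a per-event lookup merged into the classification record, is the same.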



Quote for the day:

"As a leader, you set the tone for your entire team. If you have a positive attitude, your team will achieve much more." -- Colin Powell

Daily Tech Digest - March 17, 2022

10 hard truths of change management

“We do a terrible job of understanding and navigating the emotional journey of change,” says Wanda Wallace, leadership coach and managing partner of Leadership Forum. “This is where leaders need to get smart.” While some people may welcome it, “change is also about loss — loss of my current capability while I learn new ones, loss of who I go to to solve a problem, loss of established ways of doing things,” says Wallace. “Even if someone loves the rationale for the change, they still have to grieve the loss of what was and the loss of the ease of knowing what to do, even if it wasn’t efficient.” It also involves fear. “This is usually labelled as ‘resistance,’ but I find many times it is fear of not being able to learn the new skills, not being as valued after the change, not feeling competent, not being at the center of activity the way they were before the change,” says Wallace. She advises IT leaders to name those fears, acknowledge them, and talk about the journey of learning — not just from the C-suite, but at the manager level.


Feature Engineering for Machine Learning (1/3)

During EDA, one of the first steps to undertake should be to check for and remove constant features. But surely the model can discover that on its own? Yes, and no. Consider a Linear Regression model where a non-zero weight has been initialized to a constant feature. This term then serves as a secondary ‘bias’ term and seems harmless enough … but not if that ‘constant’ term was constant only in our training data, and (unbeknownst to us) later takes on a different value in our production/test data. Another thing to be on the lookout for is duplicated features. This may not be blatantly obvious when it comes to categorical data, as it might manifest as different label names being assigned to the same attribute across different columns, e.g., one feature uses ‘XYZ’ to denote a categorical class that another feature denotes as ‘ABC’, perhaps due to the columns being culled from different databases or departments. pd.factorize() can help identify if two features are synonymous.
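Both checks fit in a few lines of pandas. The column names and values below are invented to mirror the ‘XYZ’/‘ABC’ example; the `pd.factorize` trick works because two synonymous columns produce identical integer codes regardless of what their labels are called.

```python
# Sketch of both EDA checks from the passage: dropping constant features
# and using pd.factorize() to spot two categorical columns that encode
# the same attribute under different label names.
import pandas as pd

df = pd.DataFrame({
    "legs": [2, 4, 2, 4],
    "always_one": [1, 1, 1, 1],               # constant in training data
    "dept_a_code": ["XYZ", "ABC", "XYZ", "ABC"],
    "dept_b_code": ["P1", "P2", "P1", "P2"],  # same attribute, other labels
})

# 1. Constant features: a single unique value carries no information
#    (and may silently change value in production).
constant = [c for c in df.columns if df[c].nunique() == 1]

# 2. Duplicated categoricals: identical factorized codes mean the two
#    columns are synonymous even though their labels differ.
codes_a = pd.factorize(df["dept_a_code"])[0]
codes_b = pd.factorize(df["dept_b_code"])[0]

print(constant, list(codes_a) == list(codes_b))  # → ['always_one'] True
```

One caveat: factorize assigns codes in order of first appearance, so the comparison detects synonymy only when the columns' categories co-occur row for row, which is exactly the duplicated-column case described above.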


OpenAI’s Chief Scientist Claimed AI May Be Conscious — and Kicked Off a Furious Debate

Consciousness is at times mentioned in conversations about AI. Although inseparable from intelligence in the case of humans, it isn’t clear whether that’d be the case for machines. Those who dislike AI anthropomorphization often attack the notion of “machine intelligence.” Consciousness, being even more abstract, usually comes off worse. And rightly so, as consciousness — not unlike intelligence — is a fuzzy concept that lives in the blurred intersection of philosophy and the cognitive sciences. The origins of the modern concept can be traced back to John Locke’s work. He described it as “the perception of what passes in a man’s own mind.” However, it has proved to be an elusive concept. There are multiple models and hypotheses on consciousness that have gotten more or less interest throughout the years but the scientific community hasn’t yet arrived at a consensual definition. For instance, panpsychism — which comes to mind reading Sutskever’s thoughts — is a singular idea that got some traction recently. 


Cryptographic Truth: The Future of Trust-Minimized Computing and Record-Keeping

The focus of this article so far has been on how blockchains combine cryptography and game theory to consistently form honest consensus—the truth—regarding the validity of internal transactions. However, how can events happening outside a blockchain be reliably verified? Enter Chainlink. Chainlink is a decentralized oracle network designed to generate truth about external data and off-chain computation. In this sense, Chainlink generates truth from largely non-deterministic environments. Determinism is a feature of computation where a specific input will always lead to a specific output, i.e., code will execute exactly as written. Decentralized blockchains are said to be deterministic because they employ trust-minimization techniques that remove or lower to a near statistical impossibility any variables that could inhibit internal transaction submission, execution, and verification. The challenge with non-deterministic environments is that the truth can be subjective, difficult to obtain, or expensive to verify. 
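Determinism in the sense used here is easy to demonstrate: a cryptographic hash maps a specific input to exactly one output, which is what lets every verifier independently reach the same result. A minimal illustration (the transaction strings are invented examples):

```python
# Determinism as the passage defines it: a specific input always leads to
# a specific output, so any node can re-verify a result byte for byte.
import hashlib

def digest(data: bytes) -> str:
    return hashlib.sha256(data).hexdigest()

a = digest(b"transfer 10 tokens to alice")
b = digest(b"transfer 10 tokens to alice")  # same input, same output
c = digest(b"transfer 10 tokens to bob")    # any change, different output

print(a == b, a == c)  # → True False
```

Off-chain facts (a price, a weather reading) have no such property, which is the gap oracle networks are built to bridge.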


Red Hat cloud leader defects to service mesh upstart

When service mesh first came out, Kubernetes was in such a fervor -- it had been three or four years, so people had gone through the high of it, and saw the potential, and then there was a little bit of a lull in the hype when it hadn't really exploded in terms of usage. So when service mesh came out, for certain people, it was just like, 'Oh, cool, here's the new thing.' And it was new, 1.0 sort of stuff. If you fast forward, now, four years from that, Kubernetes is now at the point where it's super stable, it's being released less often. You have a lot more companies who are deploying Kubernetes [that are] starting to build new applications. We saw a lot of companies [during] the pandemic build new applications at a faster rate than they did before. [Solo.io customer] Chick-fil-A is an example -- at their thousands of stores as a franchise, before, most people parked their car, went in the store, then came out. Nowadays, the first interaction everybody has with the store is, 'I go on the app, I place my order, I get my loyalty points.' 


Ceramic’s Web3 Composability Resurrects Web 2.0 Mashups

One of the more interesting composability projects to emerge in Web3 is Ceramic, which calls itself “a decentralized data network that brings unlimited data composability to Web3 applications.” It’s basically a data conduit between dApps (decentralized applications), blockchains, and the various flavors of decentralized storage. The idea is that a dApp developer can use Ceramic to manage “streams” of data, which can then be re-used or re-purposed by other dApps via an open API. Unlike most blockchains, Ceramic is also able to easily scale. A blog post on the Ceramic website explains that “each Ceramic node acts as an individual execution environment for performing computations and validating transactions on streams – there is no global ledger.” Also noteworthy about Ceramic is its use of DIDs (Decentralized Identifiers), a W3C web standard for authentication that I wrote about last year. The DID standard allows Ceramic users to transact with streams using decentralized identities.


Uncovering Trickbot’s use of IoT devices in command-and-control infrastructure

A significant part of its evolution also includes making its attacks and infrastructure more durable against detection, including continuously improving its persistence capabilities, evading researchers and reverse engineering, and finding new ways to maintain the stability of its command-and-control (C2) framework. This continuous evolution has seen Trickbot expand its reach from computers to Internet of Things (IoT) devices such as routers, with the malware updating its C2 infrastructure to utilize MikroTik devices and modules. MikroTik routers are widely used around the world across different industries. By using MikroTik routers as proxy servers for its C2 servers and redirecting the traffic through non-standard ports, Trickbot adds another persistence layer that helps malicious IPs evade detection by standard security systems. The Microsoft Defender for IoT research team has recently discovered the exact method through which MikroTik devices are used in Trickbot’s C2 infrastructure.


Why (and How) You Should Manage JSON with SQL

JSON documents can be large and contain values spread across tables in your relational database. This can make creating and consuming these APIs challenging because you may need to combine data from several tables to form a response. However, when consuming a service API, you have the opposite problem, that is, splitting a large (aka massive) JSON document into appropriate tables. Using custom-written code to map these elements in the application tier is tedious. Such custom code, unless super-carefully constructed by someone who knows how databases work, can also lead to many roundtrips to the database service, slowing the application to a crawl and potentially consuming excess bandwidth. ... The free-form nature of JSON is both its biggest strength and its biggest weakness. Once you start storing JSON documents in your database, it’s easy to lose track of what their structure is. The only way to know the structure of a document is to query its attributes. The JSON Data Guide is a function that solves this problem for you. 
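The split-in-the-database approach can be sketched with SQLite's built-in JSON functions (the article's JSON Data Guide is an Oracle Database feature; SQLite's `json_extract` is used here only because it is self-contained, and the `orders` table and document are invented examples).

```python
# Sketch: store a JSON document in the database, then pull attributes out
# relationally in one SQL query instead of mapping them with hand-written
# application-tier code and repeated round trips.
import json
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE orders (id INTEGER PRIMARY KEY, doc TEXT)")
doc = {"customer": {"name": "Ana"}, "items": [{"sku": "A1", "qty": 2}]}
conn.execute("INSERT INTO orders (doc) VALUES (?)", (json.dumps(doc),))

# Split the document in SQL: one statement, no per-attribute round trips.
name, qty = conn.execute(
    "SELECT json_extract(doc, '$.customer.name'),"
    "       json_extract(doc, '$.items[0].qty') FROM orders"
).fetchone()
print(name, qty)  # → Ana 2
```

Pushing the extraction into the database is precisely what avoids the chatty, bandwidth-hungry mapping code the passage warns about.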


The new CEO: Chief Empathy Officer

Top leadership has historically been responsible only for the numbers and the bottom line. Profitability and utilization figures are still important, but they generally do not motivate employees outside of the leadership, shareholders, and the board. Similarly, the feelings and well-being of the staff have long been the primary responsibility of the HR team. This division of labor no longer works in a company that wants to grow sustainably. The “Great Resignation” indicates that well-being has taken on a new level of critical importance. Arguably, a key contributor to this phenomenon has been employees’ lack of emotional connection to their employers. How can leaders help people feel connected to the organization when they are physically separated? Empathy is the answer. An empathetic leader’s understanding bridges gaps and is a key component in communicating the personal role people play in the strategy of the company. In short, empathy is not just a tactic. Genuine concern for people is the ultimate business strategy for growth.


Four key considerations when moving from legacy to cloud-native

The Cloud Native Computing Foundation (CNCF) defines it as “scalable applications in modern, dynamic environments such as public, private, and hybrid clouds” – characterised by “containers, service meshes, microservices, immutable infrastructure, and declarative APIs.” However, cloud-native computing is more than just running software or infrastructure on the cloud: cloud-only services still require constant tweaking whenever you deploy applications. With cloud-native technology, however, your applications run on stateless servers and immutable infrastructure that doesn’t require constant modification. According to the CNCF’s 2020 survey, 51% of respondents cited improved scalability, shorter deployment time, and consistent availability as the top benefits of using cloud-native technology in their projects. Furthermore, Gartner claims more than 45% of IT spending will be reallocated from legacy systems to cloud solutions by 2024.



Quote for the day:

"Leaders are people who believe so passionately that they can seduce other people into sharing their dream." -- Warren G. Bennis

Daily Tech Digest - March 16, 2022

The CIO's guide to understanding the metaverse

The metaverse can be a fully digital realm in which people interact as avatars, said Marty Resnick, vice president and analyst on Gartner's technology innovation team. It can also be a mix of real-world and virtual experiences, such as individuals in their home attending a real-world rock concert where they're able to see, hear and interact with those attending in person. In either case, the metaverse includes the ability to transact so that participants can use nonfungible tokens (NFTs), cryptocurrency or some other blockchain-enabled digital currency to buy and sell products and services and offer customer experience (CX) through 3D reconstructions. Right now, a number of companies are piloting initial versions of what the metaverse will be, Resnick said. But the fully realized metaverse will rest on major advances to three areas: ability to easily transport and conduct life in another realm, a realized 3D representation of the physical world and a Web3 economy. "We're already seeing these three pieces come up today, and when they all come together, that's where we'll see the true metaverse," Resnick said.


The 3 Phases of Leadership

In my 20-plus years of experience in startups -- having filled every position from new hire to CEO -- I've never seen a company reach its potential under anything less than exemplary leadership. The reverse can be true, of course. Any business can fail regardless of how good the leadership might be. But if your company leadership sucks, your business doesn't stand a chance. Leadership -- despite the millions of dollars and hours spent coaching it -- isn't that difficult to wrap your brain around. You know it when you see it, and you feel it when you lack it. In fact, the last person to realize when leadership is starting to deteriorate is usually the leader themselves. Self-understanding can be pretty opaque at the top. But let me save you a few hours in a hotel ballroom listening to a bunch of people who used to lead things you've kind of heard of. With almost any company, team, or project -- leadership has three distinct phases over time. The trick is getting to the right one and staying there as long as you can. The good news is that once you come to terms with which leadership phase you're in, it isn't terribly difficult to right your own ship.


Neural networks, the machine learning algorithm based on the human brain

If you programmed a computer to do something, the computer would always do the same thing. It would react to situations exactly the way you “told” it to. This is what an algorithm is: a set of instructions to solve a certain kind of problem. But there are limits to the instructions that humans can write down in code. We can’t use a simple code to teach a computer how to interpret natural language or how to make predictions, in effect how to "think" for itself. This is because no code can be large enough to cover all possible situations, such as all of the decisions we make when we drive, like predicting what other drivers will do and deciding what we will do based on that. A conventionally programmed computer is not able to react correctly to these special conditions because it simply does not (and cannot) have pre-configured responses to them. But what if it could figure them out by itself? This is what machine learning is for: to “train” computers to learn from data and develop predictive capacities and decision-making abilities.
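The contrast between "telling" a computer the rule and letting it learn the rule can be shown in a few lines. Below is a minimal, illustrative sketch: the rule y = 2x is never written into the program; a single weight is nudged toward it by observing examples. The data and learning rate are assumptions chosen for the demonstration.

```python
# Fit y = w * x by gradient descent on observed examples.
# The underlying rule (y = 2x) is never hard-coded anywhere.
data = [(1.0, 2.0), (2.0, 4.0), (3.0, 6.0)]

w = 0.0                        # the model starts knowing nothing
for _ in range(200):           # repeatedly observe the data
    for x, y in data:
        pred = w * x
        error = pred - y
        w -= 0.05 * error * x  # nudge w to reduce the error

print(round(w, 3))  # converges close to 2.0 -- learned, not programmed
```

Real machine learning systems work with millions of weights instead of one, but the loop is the same: predict, measure the error, adjust.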


What Generation Z can teach us about cybersecurity

As we discussed the impact of a growing reliance on technology to share information among friends and family, attend school during the pandemic, run businesses, and more, the group considered what exactly should society seek to prevent, protect, preserve, and advance with technology. “The government and industry should do more to prepare digital citizens for breaches or attacks that may compromise personal data and privacy,” according to Sama, an 18-year-old pursuing an interest in geopolitics and counterterrorism. She began using social media in elementary school and signed various consent forms regarding the use of her data. While aggregation of user data is not all bad—information can feed technology innovation—there need to be enhanced protections for youth on social media platforms, particularly around consent. “Women and girls are empowered and more secure when they can claim more agency in their lives, especially in settings when they are not given choices,” explained Jasmine, a 19-year-old who is pursuing a career in international relations. “On the internet, I feel I have lost that control.”


Machine Learning Reimagines the Building Blocks of Computing

In addition to performance gains, the field also advances an approach to computer science that’s growing in popularity: making algorithms more efficient by designing them for typical uses. Currently, computer scientists often design their algorithms to succeed under the most difficult scenario — one designed by an adversary trying to stump them. For example, imagine trying to check the safety of a website about computer viruses. The website may be benign, but it includes “computer virus” in the URL and page title. It’s confusing enough to trip up even sophisticated algorithms. ... And this is still only the beginning, as programs that use machine learning to augment their algorithms typically only do so in a limited way. Like the learned Bloom filter, most of these new structures only incorporate a single machine learning element. Kraska imagines an entire system built up from several separate pieces, each of which relies on algorithms with predictions and whose interactions are regulated by prediction-enhanced components.
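For context, the classic (non-learned) Bloom filter referenced here fits in a few lines of Python: a compact set membership structure that may return false positives but never false negatives. The learned variant the article mentions places an ML model in front of a small backup filter like this one. The size and hash count below are illustrative assumptions.

```python
import hashlib

class BloomFilter:
    """A classic Bloom filter: compact set membership with possible
    false positives but guaranteed no false negatives."""
    def __init__(self, size=1024, hashes=3):
        self.size, self.hashes = size, hashes
        self.bits = [False] * size

    def _positions(self, item):
        # Derive several bit positions from salted hashes of the item
        for i in range(self.hashes):
            digest = hashlib.sha256(f"{i}:{item}".encode()).hexdigest()
            yield int(digest, 16) % self.size

    def add(self, item):
        for pos in self._positions(item):
            self.bits[pos] = True

    def might_contain(self, item):
        return all(self.bits[pos] for pos in self._positions(item))

bf = BloomFilter()
bf.add("safe-site.example")
print(bf.might_contain("safe-site.example"))  # True, guaranteed
print(bf.might_contain("evil.example"))       # almost surely False
```

The adversarial URL example in the article is exactly the case where a learned filter's model can be fooled, which is why the learned design keeps a small conventional filter as a backstop.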


Microsoft has demonstrated the underlying physics required to create a new kind of qubit

After a two-year hiatus of in-person meetings due to the pandemic, the Station Q meetings resumed in early March. At this meeting with leaders in quantum computing from across industry and academia, we reported that we have multiple devices that have passed the TGP. Our team has measured topological gaps exceeding 30 μeV. This is more than triple the noise level in the experiment and larger than the temperature by a similar factor. This shows that it is a robust feature. This is both a landmark scientific advance and a crucial step on the journey to topological quantum computation, which relies on the fusion and braiding of anyons (the two primitive operations on topological quasiparticles). The topological gap controls the fault-tolerance that the underlying state of matter affords to these operations. More complex devices enabling these operations require multiple topological wire segments and rely on TGP as part of their initialization procedure. Our success was predicated on very close collaboration between our simulation, growth, fabrication, measurement, and data analysis teams.


Break the master-slave IT partnership with a co-creation strategy

Entering into an IT partnership is similar to adopting a new business paradigm. Whilst IT leaders are keen for innovation, it must be recognised that nurturing inclusivity can rapidly accelerate it. In order to bolster inclusivity, businesses can include employees from partner teams in project discussions and implement strategies that involve beneficial engagement from both sides. Doing this not only establishes a positive, inclusive culture, but also generates a variety of opinions that can lead to better decision-making. An imperative factor to note is that building an inclusive culture is not a one-time thing; rather, it is an ongoing practice that needs to be embedded within the workplace culture. Open and transparent communication with a human-to-human approach enables IT partners to understand each other’s perspectives on a project and to put forward their own ideas. This exchange of thoughts, opinions and suggestions can promote inclusivity in the workplace and further strengthen the IT partnership.

Senators Request Briefing on Infrastructure Cybersecurity

The group led by Rosen and Rounds commends CISA's recently published "Shields Up" technical guidance webpage to help organizations prepare for, respond to and mitigate the impact of cyberattacks stemming from the conflict in Eastern Europe. Last month, CISA first issued the "Shields Up" warning to U.S. organizations, urging basic but crucial cyber hygiene measures that must be addressed in the face of a potential surge in Russian state-backed cybercrime. CISA and the FBI also subsequently warned of specific wiper malware targeting Ukrainian organizations. The nation's operational cyber agency issued the advisory as denial-of-service and malware attacks began surfacing last month. CISA said at the time that it had been working hand in hand with partners to identify and rapidly share information about malware and other threats. The agency also warned that Russian cyber actors could seek to exploit existing vulnerabilities to gain persistence and move laterally. 


Multimodal Bottleneck Transformer (MBT): A New Model for Modality Fusion

Transformer models consistently obtain state-of-the-art results in ML tasks, including video (ViViT) and audio classification (AST). Both ViViT and AST are built on the Vision Transformer (ViT); in contrast to standard convolutional approaches that process images pixel-by-pixel, ViT treats an image as a sequence of patch tokens (i.e., tokens from a smaller part, or patch, of an image that is made up of multiple pixels). These models then perform self-attention operations across all pairs of patch tokens. However, using transformers for multimodal fusion is challenging because of their high computational cost, with complexity scaling quadratically with input sequence length. Because transformers effectively process variable length sequences, the simplest way to extend a unimodal transformer, such as ViT, to the multimodal case is to feed the model a sequence of both visual and auditory tokens, with minimal changes to the transformer architecture. We call this a vanilla multimodal transformer model, which allows free attention flow (called vanilla cross-attention) between different spatial and temporal regions in an image, and across frequency and time in audio inputs, represented by spectrograms.
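The quadratic scaling described above is easy to see in a toy sketch. The pure-Python single-head self-attention below uses the tokens themselves as queries, keys, and values (no learned projection matrices), which is a simplification of real transformers; the inputs are illustrative.

```python
import math

def self_attention(tokens):
    """Toy self-attention over a list of token vectors. The score
    matrix has len(tokens) ** 2 entries -- the quadratic cost."""
    n = len(tokens)
    # Dot-product score for every (i, j) pair of tokens: n * n entries
    scores = [[sum(a * b for a, b in zip(tokens[i], tokens[j]))
               for j in range(n)] for i in range(n)]
    out = []
    for row in scores:
        exps = [math.exp(s - max(row)) for s in row]   # stable softmax
        weights = [e / sum(exps) for e in exps]
        # Each output is a weighted mix of all token vectors
        out.append([sum(w * tok[d] for w, tok in zip(weights, tokens))
                    for d in range(len(tokens[0]))])
    return out, n * n

# Doubling the sequence (e.g., concatenating audio tokens onto video
# tokens, as in a vanilla multimodal transformer) quadruples the pairs.
_, pairs_4 = self_attention([[1.0, 0.0]] * 4)
_, pairs_8 = self_attention([[1.0, 0.0]] * 8)
print(pairs_4, pairs_8)  # 16 64
```

This is why simply concatenating modality tokens is expensive, and why MBT restricts cross-modal attention to a small set of bottleneck tokens instead.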


Why a modern vulnerability management strategy requires state-of-the-art solutions

Vulnerability management technology has evolved significantly in recent years, and state-of-the-art vulnerability management solutions are required to implement an effective and efficient vulnerability management plan in the modern enterprise. For starters, vulnerability identification requires a “best of breed” approach to scanning tool selection. Vulnerability scanning vendors specialize in identifying vulnerabilities at different layers of the technology stack, and it isn’t uncommon to have a dozen or more scanning tools in use throughout the organization to identify vulnerabilities in computing devices, networks, custom code, third-party libraries, cloud configurations, APIs, database technologies, SaaS products, and more. Given the vast number of vulnerability scanning and identification tools typically in use throughout the enterprise, a vulnerability aggregation capability and a centralized vulnerability database are key to implementing a consistent vulnerability response methodology across the organization. 
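The aggregation step described above boils down to normalizing each scanner's output into one record shape and de-duplicating into a central store. The sketch below illustrates the idea; the field names and sample findings are invented for the example and do not reflect any vendor's actual format.

```python
# Normalize findings from different scanners into one record shape,
# then de-duplicate into a central store keyed by (asset, CVE).
def normalize(tool, finding):
    return {
        "asset": finding.get("host") or finding.get("target"),
        "cve": finding.get("cve_id") or finding.get("id"),
        "severity": finding.get("severity", "unknown"),
        "sources": {tool},
    }

def aggregate(feeds):
    store = {}
    for tool, findings in feeds.items():
        for f in findings:
            rec = normalize(tool, f)
            key = (rec["asset"], rec["cve"])
            if key in store:
                store[key]["sources"] |= rec["sources"]  # merge sources
            else:
                store[key] = rec
    return store

# Two tools report the same Log4Shell finding in different shapes
feeds = {
    "net-scanner":  [{"host": "10.0.0.5", "cve_id": "CVE-2021-44228",
                      "severity": "critical"}],
    "code-scanner": [{"target": "10.0.0.5", "id": "CVE-2021-44228",
                      "severity": "critical"}],
}
db = aggregate(feeds)
print(len(db))  # 1 -- both tools reported the same vulnerability
```

Without this normalization layer, the same issue reported by two scanners would be tracked (and remediated) twice, which is exactly the inconsistency a centralized vulnerability database is meant to eliminate.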



Quote for the day:

"Leaders who won't own failures become failures." -- Orrin Woodward

Daily Tech Digest - March 14, 2022

Is low-code safe and secure?

Think back to the early days of computing, when developers wrote their programs in assembly language or machine language. Developing in these low-level languages was difficult, and required highly experienced developers to accomplish even the simplest tasks. Today, most software is developed using high-level programming languages, such as Java, Ruby, JavaScript, Python, and C++. Why? Because these high-level languages allow developers to write more powerful code more easily, and to focus on bigger problems without worrying about the low-level intricacies of machine language programming. The arrival of high-level programming languages, as illustrated in Figure 1, built on machine and assembly language programming and generally allowed less code to accomplish more. This was seen as a huge improvement in the ability to bring bigger and better applications to fruition faster. Software development was still a highly specialized task requiring specialized skills and techniques, but more people could learn these languages, and the ranks of software developers grew.


The Real-World Advantages and Disadvantages of Low-Code Development Platforms

Low-code proponents point to what they claim is another distinct advantage: LCDP technologies help businesses do more with less. What is more, they promise to free skilled software engineers to focus on hard problems, on creative solutions, on what proponents call “value-creating” work, as distinct from the types of recurrent, repeatable problems that MDSD and LCDP technologies aim to formalize and encapsulate in reusable applications and workflows. “We have four or five developers that … work in Mendix and they accomplish more than a team of, no lie, probably 15 to 20 developers,” Conway Solomon, CEO of Mendix customer WRSTBND, a company that provides event-management software and services, told Kavanagh. “So, what kind of cost savings is that? Especially as a small company that has a lot of ambitions, where you know, like, a lot of extra money has been [spent] on payroll, you can do it in a fraction of the cost and have the same outcome … if not better, and so we use that to our advantage.”


Meta’s Yann LeCun is betting on self-supervised learning to unlock human-compatible AI

The more popular branch of ML is supervised learning, in which models are trained on labeled examples. While supervised learning has been very successful at various applications, its requirement for annotation by an outside actor (mostly humans) has proven to be a bottleneck. First, supervised ML models require enormous human effort to label training examples. And second, supervised ML models can’t improve themselves because they need outside help to annotate new training examples. In contrast, self-supervised ML models learn by observing the world, discerning patterns, making predictions (and sometimes acting and making interventions), and updating their knowledge based on how their predictions match the outcomes they see in the world. It is like a supervised learning system that does its own data annotation. The self-supervised learning paradigm is much more attuned to the way humans and animals learn. We humans do a lot of supervised learning, but we learn most of our fundamental and commonsense skills through self-supervised learning.
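The "does its own data annotation" loop can be shown with a toy predictor: it guesses the next observation, sees what actually happens, and updates itself. The data stream is its own supervision; no human-written label appears anywhere. The repeating stream below is an illustrative assumption.

```python
from collections import defaultdict

# Count of observed transitions: transitions[prev][next] -> frequency
transitions = defaultdict(lambda: defaultdict(int))

def predict(prev):
    """Predict the most frequently observed successor, if any."""
    nxt = transitions[prev]
    return max(nxt, key=nxt.get) if nxt else None

stream = list("abcabcabcabc")   # the "world" the model observes
correct = 0
for prev, actual in zip(stream, stream[1:]):
    if predict(prev) == actual:       # make a prediction...
        correct += 1
    transitions[prev][actual] += 1    # ...then learn from the outcome

print(correct, "of", len(stream) - 1)  # 8 of 11 -- only the first
                                       # cycle's guesses are wrong
```

Large self-supervised language and vision models follow the same pattern at vastly greater scale: mask or withhold part of the input, predict it, and update on the difference.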


How do we close the huge generation gap on flexible working?

There is a clear generational divide in where people want to work and how they see the purpose of the office. For young people, flexibility is key. They want to be in the office and connect and collaborate with co-workers face to face. That helps them onboard, form working relationships, receive guidance and soak up the company culture – all issues that workers have struggled with during the pandemic, a Microsoft and YouGov study highlighted in December. The office is often the vehicle for knowledge transfer between generations. But they also want to work from home when they need to – to look after a sick relative or wait for a repair engineer, for example. They don’t see this as a major issue, because their view isn’t based on traditional ways of working in an office every day. For them, the office space no longer stops at the office. They want to work where they want, when they want. And they want their bosses to provide the tools to help them do that. Last year, Microsoft’s Work Trend Index found that 42 percent of employees who worked from home lacked office essentials, and one in 10 didn’t have an adequate internet connection to do their job. 


How Do You Identify a Successful Scrum Master?

The Scrum team delivers a valuable Increment every single Sprint. As a framework, Scrum is focusing on delivery. Admittedly, this comes with many challenges. However, if a Scrum team is not regularly creating value for the (internal and external) stakeholders, everything else is of lesser importance. (A secondary positive effect of regularly delivering valuable Increments is building trust among stakeholders. Typically, building trust with them results in less supervision, for example, in the form of reporting duties or committees messing with Scrum—you get the idea. All of this is bolstering self-management, thus making working as a Scrum team more effective and enjoyable.) ... Other people want to join the Scrum team because nothing succeeds like success. (People voting with their feet is an excellent indicator for Scrum Master success, and it applies in both directions. My tip: Run regular, anonymous surveys in the Scrum team and ask whether team members would recommend an open position in the organization to a good friend with an agile mindset and track the development of this “employer NPS®” regularly to spot trends.)


Knox Wire Introduces an Eye-opening Network for Global Financial Settlements

The Knox Wire system was built by utilising world-class distributed ledger technology, with artificial intelligence integrated to improve its efficiency. It provides security, information authentication, and information storage on the network. The company credits the combined effort of its team, with extensive experience in the development and finance sectors, for creating the global settlement network. It also maintains professionalism throughout its interactions with institutions, hoping to revolutionise financial systems through innovation. The endgame is to benefit users, institutions, and eventually, governments. The onboarding process for financial institutions is straightforward, beginning with an agreement with the platform. The institution signs a contract with the settlement network and sets up accounts for the relevant employees on the Knox Wire system. Then, the network provides AI integrations alongside its API parameters to support all the processes. 


Fintech Roundup: Due diligence makes a comeback and a former Better.com employee speaks out

We all knew – or at least some of us did, ahem – that this was likely not sustainable in the long term. Investors appeared to be backing some startups in part due to FOMO, and that’s not necessarily a good thing. So as the first quarter draws to a close, it’s clear that while fundraises have in no way come to a screeching halt, investors are starting to pump the brakes. Generally, it appears we are experiencing a market pullback – which Alex touches on in this piece – precipitated by a number of things, not least the conflict in Ukraine and disappointing performances by companies that went public in the last year. And fintech, last year’s rising star of venture, is not immune. My former colleague Joanna Glasner at Crunchbase News published a story on March 7 indicating that venture capitalists’ enthusiasm for fintech seems to be waning as of late. Her data point, according to Crunchbase data, was that in the two weeks leading up to her post, a total of 51 fintech companies across the globe had collectively raised $1.1 billion in seed through late-stage venture funding. 


Ensuring safety of digital communities with next-gen AI and proactive care

Safety would be easier to achieve if there was only one type of problematic behaviour online, but there are so many different categories in places you don’t expect. It’s become more difficult for consumers to protect their privacy when there’s so much software beyond the layperson’s understanding. Over a decade ago, a Cambridge Analytica-linked firm abused platforms to deceive people who held too much trust in what they saw online, swaying an election in Trinidad by encouraging people to abstain from voting, ultimately leading to the opposition party winning. It was made to look like a natural resistance movement, but it was engineered through corrupt practices. Coronavirus disinformation online has been a major battleground in the last few years. It’s hard to estimate how many lives have been potentially lost because people trusted unverified sources. The need for platforms to moderate user-generated content has never been more severe. Schwartz points to the importance of detecting issues early, saying, “If harmful online activity is left unchecked, its reach can grow rapidly and fester, exposing countless users to violent, extremist, or misleading content.”


Talent Shortage: Are Universities Delivering Well-Prepared IT Graduates?

Catherine Southard, vice president of engineering at D2iQ, says her company hasn’t had much success finding new grads with experience in Kubernetes and the Go programming language, in which D2iQ’s product is primarily developed. “Part of that is because the tech landscape changes so quickly. It would be great for a representative from tech companies -- maybe a panel of CTOs -- to sit down with curriculum developers every couple of years and talk through industry trends and where technology is headed, and then brainstorm how to bridge the gap between university and industry,” she says. Southard added that one thing students can do is research jobs that look interesting, then see what tech stack those companies are using. They can then equip themselves to land those jobs by studying that technology through free online resources or courses. She sees another area for improvement in support for internship programs. Historically, D2iQ had a program in the US, but it was expensive to operate, and it didn't lead to long-term employee retention, except for a couple of stand-out talents.


IT talent: Rethinking age in the hybrid work era

Never have an organization’s technical capabilities mattered less to its long-term differentiation and competitiveness. The rise of accessible, affordable outsourced vendors, SaaS platforms, and capabilities-on-demand means that most companies can acquire whatever leading-edge technologies and skill sets are needed at the moment. Leaders know the companies that win are the ones that get the most out of their people and teams. Resumes don’t tell us much about the skills that matter most in our current climate, and the computers we “hire” to read resume keywords tell us even less. Older workers, having seen and been through countless configurations of teams, conflicts, and trends, have figured out how to focus on what makes a difference. We might learn more from them about how to spot and hire the unique capabilities that real people bring to real-people solutions in our workforce. ... As our work becomes physically less proximate, we need to find ways to seek out guidance – not just in classes and courses, but in real time, from our colleagues. 



Quote for the day:

"Your first and foremost job as a leader is to take charge of your own energy and then help to orchestrate the energy of those around you." -- Peter F. Drucker

Daily Tech Digest - March 13, 2022

3 leadership lessons from Log4Shell

APIs add to an organization’s attack surface, so it’s important to know where they are used. Gartner estimates that roughly 90% of web apps will soon have more of their exposed attack surface area accounted for by APIs as opposed to their own interfaces. Indeed, in 2021, malicious traffic around APIs grew by nearly 350%. Despite these trends, API use only continues to grow. Gone are the days of monolithic applications. Modern enterprise web applications are built with coupled services that communicate through APIs galore, and each component is a target for attackers if left unchecked. Pair that widened attack surface with the insane growth of APIs, and the need for strong API security is clear. Organizations need to cover their entire attack surface by implementing automated and accurate scans via user interfaces and APIs if they want to eliminate potential weak spots before they become problems. Put simply, security debt is an organization’s total inventory of unresolved security issues. These issues have a wide variety of sources, including knowledge gaps, inadequate tooling or cutting corners during testing in the race to market.


Increasing security for single page applications (SPAs)

First and foremost, the frontend code operates in an insecure environment: a user’s browser. SPAs often possess a refresh token that grants offline access to a user’s resources and can obtain new access tokens without interaction from the user. As these credentials are readable by the SPA, they are vulnerable to cross-site scripting (XSS) attacks, which can have dangerous repercussions such as attackers gaining access to users’ personal data and functionalities not normally accessible through the user interface. As the online data pool grows and hackers become more sophisticated, security must be taken seriously to protect customers’ information and businesses’ reputations. However, designing security solutions for SPAs is no easy feat. As well as the strongest browser security and simple and reliable code, software developers must consider how to deliver the best user experience – wrapping all this into a solution that can be deployed anywhere. The SPA’s web content can be deployed to many global locations via a Content Delivery Network (CDN). Web content is then close geographically to all users so that web downloads are faster.


AI and CSR can strengthen anti-corruption efforts

In addition to CSR, there has been much excitement about the future of AI in anti-corruption work. AI has increasingly become a part of our daily lives, from digital assistants like Siri and Alexa, to self-driving cars like Teslas and ride-hailing applications like Uber. Given that AI has been useful in so many ventures, anti-corruption scholars are eager to apply it to their work. In fact, AI has been described as “the next frontier in anti-corruption.” ... However, AI and anti-corruption discussions so far have mostly focused on governmental efforts to address corporate corruption, not on companies using AI to mitigate corporate corruption — even though many of them already use AI to maximize profit. In the corporate anti-corruption context, AI can provide companies with proposed investment destinations or transactions, help detect corruption risks in such ventures, and improve due diligence processes. AI can also provide more information for yearly anti-corruption policy reviews and assist in designing training based on AI analyses of company processes, reports and operations.


Data Mesh: The Balancing Act of Centralization and Decentralization

Another concept that resonates well is data products. Managing and providing data as a product isn't the extreme of dumping raw data, which would require all consuming teams to perform repeatable work on data quality and compatibility issues. Nor is it the extreme of building an integration layer using one (enterprise) canonical data model with strong conformance from all teams. Data product design is a nuanced approach of taking data from your (complex) operational and analytical systems and turning it into read-optimized versions for organization-wide consumption. This approach comes with many best practices, like aligning your data products with the language of your domain, setting clear interoperability standards for fast consumption, capturing data directly from the source of creation, addressing time-variant and non-volatile concerns, encapsulating metadata for security, ensuring discoverability, and so on. You can find more of these best practices here.
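A minimal sketch of "read-optimized versions" of operational data: denormalize per domain entity so consumers need no joins and no knowledge of the source system's internal schema. The table contents and field names below are illustrative assumptions.

```python
# Normalized operational tables (illustrative)
customers = {1: {"name": "Acme Corp", "region": "EU"}}
orders = [
    {"order_id": 100, "customer_id": 1, "total": 250.0},
    {"order_id": 101, "customer_id": 1, "total": 120.0},
]

def build_customer_orders_product(customers, orders):
    """Publish a read-optimized, domain-aligned data product:
    one pre-joined record per customer."""
    product = []
    for cid, cust in customers.items():
        cust_orders = [o for o in orders if o["customer_id"] == cid]
        product.append({
            "customer": cust["name"],      # domain language, not keys
            "region": cust["region"],
            "order_count": len(cust_orders),
            "lifetime_value": sum(o["total"] for o in cust_orders),
        })
    return product

print(build_customer_orders_product(customers, orders))
```

This sits between the two extremes in the text: the consumer gets neither raw dumps (requiring repeated cleanup work) nor a conformed enterprise model (requiring agreement from every team), but a view shaped for its domain.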


Role of the Metaverse, AI and digitalization — Are brands and consumers prepared for the new era?

The metaverse has a mostly positive impact on brands, but there are still some loopholes that worry them. For instance, the French champagne house Armand de Brignac has recently filed trademark applications to register the appearance of its gold bottle packaging in virtual reality, augmented reality, video, social media and the web. Like it, many brands have established identities when it comes to product and packaging. Since this alternate reality is fairly new territory for brands, it is difficult for them to gauge whether a product or its packaging has distinctiveness outside the metaverse. Even if it does, it is unclear whether those rights will be sufficient to claim infringement inside the metaverse. Among other concerns, the metaverse also brings privacy and security risks to light. Being an online-enabled space, it is uncertain whether consumers and brands may face new and unknown privacy and authenticity issues. The rise of the metaverse is just like that of the internet – former Amazon strategist Matthew Ball estimates that by 2027 every company will be a gaming company, implying that the metaverse will soon become a normal part of people’s lives.


Data Protection In The EU: New GDPR Right Of Access Guidelines

The right of access has a broad scope: in addition to basic personal data, according to the EDPB it also includes, for example, subjective notes made during a job application, a history of internet and search engine activity, etc. Unless explicitly stated otherwise, the request must be understood to relate to all personal data relating to the data subject, but the controller may ask the data subject to specify the request if it processes a large amount of data. This applies to each request: if a data subject makes more than one request, it would therefore not be sufficient to provide access only to the changes since the last request. Even data that may have been processed incorrectly or unlawfully should be provided. Data that has already been deleted, for example in accordance with a retention policy, and is therefore no longer available to the controller, does not need to be provided. Specifically, the controller will have to search all IT systems and other archives for personal data using search criteria that reflect the way the information is structured, for example, name and customer or employee number.


Even 'Perfect' APIs Can Be Abused

Even those organizations that do bring a proactive focus to application security tend to put more emphasis on protecting APIs created for web and mobile applications. In these cases, many organizations incorrectly assume that their web application firewalls (WAFs) will bear much of the load of securing this type of API usage. But the biggest API protection gap, even in sophisticated organizations, is the protection of APIs that are open to partners. These APIs are ripe for abuse. Even if they are perfectly written and have no vulnerabilities, they can be abused in unanticipated ways to expose the core business functions and data of the organizations that share them. Perhaps the best example of this is the Cambridge Analytica (CA) scandal that rocked Facebook in 2018. As a brief refresher, CA exploited Facebook's open API to gather extensive data about at least 87 million users. This was accomplished by using a Facebook quiz app that exploited a permissive setting that allowed third-party apps to collect information about the quiz-taker, as well as all of their friends' interests, location data, and more.
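The permissive-setting failure mode can be sketched in a few lines. This is not Facebook's actual API; the data structures and the per-user consent flag below are illustrative assumptions. The point is that a friends-data endpoint should check consent for each person whose data is returned, rather than trusting the calling app's access alone.

```python
# Hypothetical sketch: a bug-free endpoint can still leak data if
# permissions are too broad. Here, each friend's own opt-in flag is
# checked before their profile data is shared with a third-party app.
# All names and fields are illustrative assumptions.

USERS = {
    "alice": {"friends": ["bob", "carol"], "interests": ["chess"], "shares_with_apps": True},
    "bob":   {"friends": ["alice"], "interests": ["hiking"], "shares_with_apps": False},
    "carol": {"friends": ["alice"], "interests": ["jazz"], "shares_with_apps": True},
}

def friends_data(user_id):
    """Return profile data only for friends who opted in to app sharing."""
    me = USERS[user_id]
    return {f: {"interests": USERS[f]["interests"]}
            for f in me["friends"]
            if USERS[f]["shares_with_apps"]}

print(friends_data("alice"))  # bob is excluded: he never consented
```

The permissive setting CA exploited amounted to skipping the per-friend check, so one user's consent exposed their entire social graph.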


Five cloud security risks your business needs to address

“Misconfigurations remain a top risk for cloud applications and data,” says Paul Bischoff, privacy advocate and editor at Comparitech, a website that rates technologies on their cybersecurity. A misconfiguration happens when an IT team inadvertently leaves the door open for hackers by, say, failing to change a default security setting. This is often down to human error and/or a misunderstanding of how a firm’s systems operate and interact. If misconfigurations happen on a non-cloud-connected network, they’re self-contained and, potentially, accessible only to those in the physical workplace. But, once your data is in the cloud, “it is subject to someone else’s security. You do not have any direct control or ability to test it,” notes Steven Furnell, professor of cybersecurity at the University of Nottingham. “This means trusting another party’s measures, so look for the appropriate assurances from them rather than making assumptions.” 
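The default-setting failure Bischoff describes lends itself to a simple automated audit. The setting names and "insecure default" values below are illustrative assumptions, not any real provider's configuration schema; the sketch only shows the idea of flagging settings still left at a known-dangerous default.

```python
# Hypothetical sketch: flag cloud settings still at an insecure default.
# The keys and default values are illustrative assumptions.

INSECURE_DEFAULTS = {
    "bucket_public_read": True,    # storage readable from the internet
    "admin_password": "admin",     # unchanged default credential
    "encryption_at_rest": False,   # encryption left off
}

def audit(config):
    """Return the settings whose value still matches a known insecure default."""
    return [key for key, bad_value in INSECURE_DEFAULTS.items()
            if config.get(key) == bad_value]

config = {"bucket_public_read": True,
          "admin_password": "s3cure!",
          "encryption_at_rest": False}
print(audit(config))  # ['bucket_public_read', 'encryption_at_rest']
```

Running such a check routinely, rather than at deployment only, catches the human error the passage describes before an attacker does.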


8 technology trends for innovative leaders in a post-pandemic world

Leaders today must make difficult decisions in a very uncertain environment, decisions that can have a profound impact on their workforce and employee wellbeing (although it's not all grim). The staggering amount of data created on the internet has also brought new risks, such as cyber-attacks that are increasingly frequent and costly. What our Young Global Leaders know well is that it's easy to lead when times are going well, but real responsibility emerges when you must stand up for what you believe in. Responsible leaders truly shine in times of crisis. With this in mind, we asked eight Young Global Leaders how they will leverage technology and innovate to become better leaders in 2022. New computational and AI tools are already being used by business leaders to guide strategic decision-making. In the next decade, this software will become more powerful and will be applied in new and different settings. Built upon the mathematics of game theory, these AI tools harness the computational innovations that power chess engines.


As cloud costs spiral upward, enterprises turn to a thing called FinOps

Enter FinOps. This practice is intended to help organizations get maximum business value from cloud "by helping engineering, finance, technology and business teams to collaborate on data-driven spending decisions," according to the FinOps Foundation. (Yes, there's now even an entire foundation devoted to the practice.) In many cases, organizations are practicing the art of FinOps without even calling it that. Respondents to Flexera's survey are actively involved in ongoing usage and cost management for both SaaS (69%) and public cloud IaaS and PaaS (66%). "More and more users are swimming in the FinOps side of the pool, even if they may not know it -- or call it FinOps yet," the survey's authors state. In addition, for the sixth year in a row, "optimizing the existing use of cloud is the top initiative for all respondents, underscoring the need for FinOps teams or similar ways to improve cost savings initiatives," they also note. While the survey doesn't explicitly ask about FinOps adoption, the authors also state that some organizations have organized FinOps teams to assist in evaluating cloud computing metrics and value.



Quote for the day:

"The art of leadership is saying no, not yes. It is very easy to say yes." -- Tony Blair