Daily Tech Digest - January 07, 2020

Wi-Fi 6 will slowly gather steam in 2020

Making sure devices are compliant with modern Wi-Fi standards will be crucial in the future, though it shouldn’t be a serious issue that requires a lot of device replacement outside of fields that are using some of the aforementioned specialized endpoints, like medicine. Healthcare, heavy industry and the utility sector all have much longer-than-average expected device lifespans, which means that some may still be on 802.11ac. That’s bad, both in terms of security and throughput, but according to Shrihari Pandit, CEO of Stealth Communications, a fiber ISP based in New York, 802.11ax access points could still prove an advantage in those settings thanks to the technology that underpins them. “Wi-Fi 6 devices have eight radios inside them,” he said. “MIMO and beamforming will still mean a performance upgrade, since they’ll handle multiple connections more smoothly.” A critical point is that some connected devices on even older 802.11 versions – n, g, and even b in some cases – won’t be able to connect to 802.11ax radios, so they won’t be able to benefit from the numerous technological upsides of the new standard. Making sure that a given network is completely cross-compatible will be a central issue for IT staff looking to upgrade access points that service legacy gear.


Cloud storage solutions gaining momentum through disruption by traditional vendors

This disruption, where traditional on-prem vendors have brought their offerings out into the public cloud, has led to the emergence of more innovative cloud storage solutions. “This has given customers flexibility in how they approach storage in a cloud environment, with more enterprise-style services being offered,” continued Beale. On-prem vendors want to have storage apps in the cloud. In part, this is a marketing positioning exercise where they want to be seen as new and innovative vendors by working in the cloud. But there is a second, more technical and practical reason for this changing cloud storage landscape. “Around 80% of the organisations that we speak to have some sort of cloud presence. That means they’re using cloud-based technologies and at some point, multiple cloud providers, along with an on-prem solution,” explained Beale. Having a ubiquitous big data plane across an organisation is appealing to customers, because they don’t have to spend a lot of time, money or resources on disparate platforms across multiple vendors.


Why flexible work and the right technology may just close the talent gap

Increasingly what we see is that freelancers become full-time freelancers, meaning it’s their primary source of income. Usually, as a result of that, they tend to move. And when they move, it is out of big cities like San Francisco and New York; they tend to move to smaller cities where the cost of living is more affordable. That’s true for the freelance workforce, if you will, and it’s pulling the rest of the workforce with it. What we see increasingly is that companies are struggling to find talent in the top cities where the jobs have been created. Because they already use freelancers anyway, they are also allowing their full-time employees to relocate to other parts of the country, as well as hiring people away from their headquarters, people who essentially work from home as full-time remote employees. ... And along the way, companies realized two things. Number one, they needed different skills than they had internally. So the idea of the contingent worker or freelance worker who has that specific expertise becomes increasingly vital.


Life on the edge: A new world for data


For many CIOs, a strategy for edge computing will be entirely new. Sunil urges CIOs to assess what parts of edge computing can be achieved in-house and what should be done through a consulting firm. “A system integrator will play a big role in bringing it all together,” he says. Chris Lloyd-Jones, emerging technology, product and engineering lead at Avanade, says large enterprises are starting to build IoT platforms to centrally manage edge computing devices and provide connectivity across geographic regions. “Edge computing is no longer just about an on-board computer where data from the device is uploaded via a USB cable,” he says. “Edge computing now handles 4G and 5G connectivity with periodic connectivity, and support for full-scale machine learning and computationally intensive workloads. Data can be transmitted to and from the cloud. This provides centralised management.” Lloyd-Jones says the cloud can be used to train machine learning models, which can then be deployed to edge devices and managed like any other IT equipment.


Microsoft: RDP brute-force attacks last 2-3 days on average


Usually, these attacks use combinations of usernames and passwords that have been leaked online after breaches at various online services, or that are simplistic and easy to guess. Microsoft says that the RDP brute-force attacks it recently observed last 2-3 days on average, with about 90% of cases lasting for one week or less, and fewer than 5% lasting for two weeks or more. The attacks lasted days rather than hours because attackers were trying to avoid getting their attack IPs banned by firewalls. Rather than try hundreds or thousands of login combos at a time, they were trying only a few combinations per hour, prolonging the attack across days at a much slower pace than previously observed RDP brute-force attacks. "Out of the hundreds of machines with RDP brute force attacks detected in our analysis, we found that about 0.08% were compromised," Microsoft said. "Furthermore, across all enterprises analyzed over several months, on average about 1 machine was detected with high probability of being compromised resulting from an RDP brute force attack every 3-4 days," the Microsoft research team added.
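
The detection problem described here is catching a low hourly rate of failures that persists across days. The following is a minimal sketch of that idea, not Microsoft's detection logic; the event format (ISO timestamp, source IP, success flag) and the thresholds are assumptions made for illustration.

```python
# Hedged sketch: flag "low and slow" RDP brute-force sources that stay under a
# small hourly failure rate but keep coming back for multiple days.
from collections import defaultdict
from datetime import datetime

def flag_slow_bruteforce(events, max_per_hour=10, min_days=2):
    """events: iterable of (iso_timestamp, source_ip, success_bool) sign-in records."""
    failures_per_hour = defaultdict(int)   # (ip, date, hour) -> failed attempts
    active_days = defaultdict(set)         # ip -> dates with at least one failure

    for ts, ip, success in events:
        if success:
            continue
        t = datetime.fromisoformat(ts)
        failures_per_hour[(ip, t.date(), t.hour)] += 1
        active_days[ip].add(t.date())

    flagged = []
    for ip, days in active_days.items():
        peak = max(count for (i, _, _), count in failures_per_hour.items() if i == ip)
        # A low hourly rate spread across several days matches the multi-day,
        # few-attempts-per-hour pattern described in the excerpt above.
        if peak <= max_per_hour and len(days) >= min_days:
            flagged.append((ip, len(days), peak))
    return flagged

if __name__ == "__main__":
    sample = [
        ("2020-01-05T01:15:00", "203.0.113.7", False),
        ("2020-01-05T02:40:00", "203.0.113.7", False),
        ("2020-01-06T03:05:00", "203.0.113.7", False),
        ("2020-01-06T11:00:00", "198.51.100.2", True),
    ]
    print(flag_slow_bruteforce(sample))
```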


AI, privacy and APIs will mold digital health in 2020

Interoperability is a major player in health tech innovation: patients will always receive care across multiple venues, and secure data exchange is key to providing continuity of care. Standardized APIs can provide the technological foundations for data sharing, extending the functionality of EHRs and other technologies that support connected care. Platforms like Validic Inform leverage APIs to share patient-generated data from personal health devices to providers, while giving them the ability to configure data streams to identify actionable data and automate triggers. In the upcoming year, look for major players like Apple and Google to make strides toward interoperability and breaking down data silos. Apple’s Health app already is capable of populating with information from other apps on your phone. Add your calorie intake to a weight loss app? Time your miles with a running app? Monitor your bedtime habits with a sleep tracking app? You’ll find that info aggregating in your Health app. Apple is uniquely positioned to be the driver of interoperability, and Google is not far behind.


Capitalizing on the promise of artificial intelligence


Remarkably, a majority of early adopters within each country believe that AI will substantially transform their business within the next three years. However, as pointed out in Is the window for AI competitive advantage closing for early adopters?—part of Deloitte’s Thinking Fast series of quick insights — the early adopters also believe that the transformation of their industry is following close on the heels of their own AI-powered business transformation. Globally, there’s a sense of urgency among adopters that now is the time to capitalize on AI, before the window for competitive advantage closes. However, comparing AI adopters across countries reveals notable differences in AI maturity levels and urgency. While many nations regard AI as crucial to their future competitiveness, these comparisons indicate that some countries are adopting AI aggressively, while others are proceeding with considerable caution—and may be at risk of being left behind. Consider Canada.


Building the ‘Intelligent Bank’ of the Future

The evolving nature of open banking is well understood within the banking industry but not yet by consumers; it has proceeded in stages in Europe and elsewhere, but not yet in the U.S. From the consumer perspective, people want easier ways to manage their money and make their daily life easier. Many financial institutions, on the other hand, are somewhat overwhelmed by the prospect of delivering on the open banking promise. The paradox lies in the desire to deliver more integrated solutions while remaining transparent about the sharing of data between multiple organizations. Most of the concerns around open banking revolve around the collection and sharing of data with third parties and the inherent risks of such sharing. There is also the need to educate both the consumer and the employee on data security. The end result is less-than-clear regulation around open banking, and very few organizations actually prepared to deliver on what has been promised to consumers. That said, it is interesting that more than four in ten financial institutions (41%) are looking beyond just offering banking products in the future.


Backdoors and Breaches incident response card game makes tabletop exercises fun

Unlike some tabletop exercises that can take months to prepare and last for days, Backdoors and Breaches makes it simple to role-play thousands of possible security incidents, and to do so even as a weekly exercise. The game can be played just by blue teamers but could also involve a member of the legal team, management, or a member of the public relations team. The ideal game involves no more than six players to ensure that everyone is engaged and participating. "This game can be played every Thursday at lunch," Blanchard tells CSO. If the upside of the B&B card deck is the ability to instantly create thousands of scenarios from generic attack methods, the downside is that it lacks cards for specific industries, or company-specific issues. Black Hills plans for expansion decks in 2020, including one for industrial control system (ICS) security and another for web application security. The B&B deck launched at DerbyCon 2019, and Blanchard says they plan to give away free decks at every infosec conference they attend in 2020. The decks are also available on Amazon for $10 plus shipping, which, he says, just covers their costs.


An Introduction to Blazor and Web Assembly

Blazor is a new framework that lets you build interactive web UIs using C# instead of JavaScript. It is an excellent option for .NET developers looking to take their skills into web development without learning an entirely new language. Currently, there are two ways to work with Blazor: running on an ASP.NET Core server with a thin client, or running completely in the client’s web browser using WebAssembly instead of JavaScript. ... UI component libraries were created long before Blazor. Most existing frameworks that target web applications are based on JavaScript. They are still compatible with Blazor due to its ability to interoperate with JavaScript. Components that are primarily based on JavaScript are called wrapped JavaScript controls, as opposed to components written entirely in Blazor, which are referred to as native Blazor controls. Native Blazor controls parse the Razor syntax to generate a render tree that represents the UI and behavior of that control. The render tree is why it’s possible to run server-side Blazor. The tree is parsed and used to generate HTML on the server that’s sent to the client for rendering. In the case of Blazor WebAssembly, the render tree is parsed and rendered entirely in the client.



Quote for the day:


"No great manager or leader ever fell from heaven, its learned not inherited." -- Tom Northup


Daily Tech Digest - January 06, 2020

Deep learning vs. machine learning: Understand the differences

Dimensionality reduction is an unsupervised learning problem that asks the model to drop or combine variables that have little or no effect on the result. This is often used in combination with classification or regression. Dimensionality reduction algorithms include removing variables with many missing values, removing variables with low variance, Decision Tree, Random Forest, removing or combining variables with high correlation, Backward Feature Elimination, Forward Feature Selection, Factor Analysis, and PCA (Principal Component Analysis). Training and evaluation turn supervised learning algorithms into models by optimizing their parameter weights to find the set of values that best matches the ground truth of your data. The algorithms often rely on variants of steepest descent for their optimizers, for example stochastic gradient descent, which is essentially steepest descent performed multiple times from randomized starting points.
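
To make the two ideas above concrete, here is a minimal sketch that reduces dimensionality with PCA and then trains a linear classifier by stochastic gradient descent using scikit-learn. The synthetic dataset, component count, and other hyperparameters are illustrative choices, not values from the article.

```python
# Hedged sketch: PCA to drop/combine low-value variables, then a linear model
# whose parameter weights are optimized with stochastic gradient descent.
from sklearn.datasets import make_classification
from sklearn.decomposition import PCA
from sklearn.linear_model import SGDClassifier
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

# 200 features, only a handful of them informative -- a candidate for reduction.
X, y = make_classification(n_samples=2000, n_features=200, n_informative=15,
                           random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = make_pipeline(
    StandardScaler(),
    PCA(n_components=20),                      # combine/drop low-variance directions
    SGDClassifier(max_iter=1000, random_state=0),  # linear classifier fit by SGD
)
model.fit(X_train, y_train)
print("held-out accuracy:", model.score(X_test, y_test))
```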



Why enterprises should care about DevOps


The old days of manually doing everything as an IT person are gone, and companies that are still operating that way are undergoing transformation. But I don’t think we’re ever going to get rid of operational concerns. It’s just going to be that rather than doing things manually, or through graphical consoles, you're going to work via APIs, scripting languages and automation tools like Puppet. And in many ways – and I say this quite a lot – DevOps has made operations people feel that they must become developers to get their job done. But it’s more about embracing software engineering principles. It’s about version control, release management, branching strategies, and continuous integration and delivery. We’ve seen this repeatedly, and that’s why we added features to Puppet Enterprise around continuous delivery, because the most successful customers were those that were adopting infrastructure as code.


Legal engineering: A growing trend in software development

Legal engineers come from incredibly diverse backgrounds and collectively have years of experience and insights that benefit our customers tremendously. They include former attorneys from top law schools and some of the country's best law firms, experts in contract law, and a former civil rights trial attorney. We have other legal engineers who came to us from top-tier management consulting firms and several who gained considerable experience at some of Silicon Valley's best SaaS companies. These diverse backgrounds and responsibilities mean that the role of legal engineering can seem very different depending on who you ask. To our customers, they are thought partners, advising on best practices for building a modern legal team. To our product team, they are the voice of the user, listening and synthesizing valuable feedback. Sometimes, we even refer to them internally as our in-house S.W.A.T. team, because they are ready and able to jump in and help fix any situation. Ultimately, legal engineers are at the forefront of the modernization of in-house legal. As legal technology continues to evolve, so will legal engineering.


Fragmentation by Country
In this post, we look at how Fragmentation varies across the globe and key statistics you should keep in mind if you have a presence in these markets. The growth mantra of online businesses is scale — reach more users, fast. However, as you scale across countries, it’s important to ensure that your app/website is compatible with your users’ devices and browsers. Compatibility is to online businesses what distribution is to brick-and-mortar ones. You might have the best product in the world, but it counts for nothing if your customers don’t have the experience you designed for them. For instance, being compatible with the top 20 devices will help you cover 70% of the US audience. In India, not only will the devices be different, the coverage provided will be less than 35%. Similarly, if your mobile website doesn’t load properly in the Opera browser, you would be ignoring almost half of the Nigerian market!


Industry 4.0 / Industrial IoT / Smart Factory
“This consolidation will strengthen the ability of the IIC to provide guidance and advance best practices on the uses of distributed-ledger technology across industries, and boost the commercialization of these products and services,” said 451 Research senior blockchain and DLT analyst Csilla Zsigri in a statement. Gartner vice president and analyst Al Velosa said that it’s possible the move to team up with TIoTA was driven in part by a new urgency to reach potential customers. Where other players in the IoT marketplace, like the major cloud vendors, have raked in billions of dollars in revenue, the IIoT vendors themselves haven’t been as quick to hit their sales targets. “This approach is them trying to explore new vectors for revenue that they haven’t before,” Velosa said in an interview. The IIC, whose founding members include Cisco, IBM, Intel, AT&T and GE, features 19 different working groups, covering everything from IIoT technology itself to security to marketing to strategy.



Up to half of developers work remotely; here's who's hiring them

It is estimated that there are between 18 and 21 million developers across the globe. Of these, only about one million -- or five percent -- are in the United States, so you can see how an employer in the US, or anywhere else for that matter, needs to spread its recruiting and staffing wings. It's in the best interest of tech-oriented employers, then, to be open to this global pool of talent. There are a number of companies leading the way, actively hiring globally distributed tech workforces. Glassdoor recently published a list of leading companies that encourage remote work, which includes some prominent tech companies, and Remotive has been compiling a comprehensive list of more than 2,500 companies of all sizes that hire remote IT workers. Survey data from Stack Overflow, analyzed by Itoro Ikon, finds that out of almost 89,000 developers participating in its most recent survey, 45% work remotely at least part of the time, and 10% indicated they are full-time remote workers.


The Fundamental Truth Behind Successful Development Practices: Software is Synthetic


Look across the open plan landscape of any modern software delivery organization and you will find signs of it, this way of thinking that contrasts sharply with the analytic roots of technology. Near the groves of standing desks, across from a pool of information radiators, you might see our treasured artifacts - a J-curve, a layered pyramid, a bisected board - set alongside inscriptions of productive principles. These are reminders of agile training past, testaments to the teams that still pay homage to the provided materials, having decided them worthy and made them their own. What makes these new ways of working so successful in software delivery? The answer lies in this fundamental yet uncelebrated truth - that software is synthetic. Software systems are creative compounds, emergent and generative; the product of elaborate interactions between people and technology.


5G is poised to transform manufacturing

Today, many manufacturers use fiber, Wi-Fi and 4G LTE rather than 5G because 5G infrastructure, standards, and devices are not yet available and proven. “But many people are starting to look at 5G today, looking at it as a more future-proof strategy than adopting 4G,” said Dan Hays, principal and head of US corporate strategy practice at PricewaterhouseCoopers LLP. “4G LTE has been around for a little over a decade.” 5G devices available today are very early ones. “They are not yet at the mass-production level and have not come down the cost curve to drive large-scale adoption,” he said. According to Erik Josefsson, vice president and head of advanced industries at Ericsson, which makes underlying 5G technology, 5G is currently at Release 15, which offers high data rates, extended coverage, and low latency compared to 4G – but doesn’t get down to the goal of 1-millisecond latency. "You can get 10 milliseconds," he said. "But you're not down to 1 millisecond yet. Release 16 is ultra-reliable low-latency, down below 10 milliseconds, for more complex use cases."


These five tech trends will dominate 2020


The constant drip-drip of data leaks and privacy catastrophes shows that security is still, at best, a work in progress for many organisations. And security is still a minor consideration for many business leaders too. Perhaps that's because there have been so many leaks that they think the risk to their reputation is low. It's a dangerous assumption to make. More apps and more devices mean security teams are already spread too thinly. Add in new risks like Internet of Things projects, 5G devices and deepfakes, and the challenges mount unless companies take the broadest possible view of security. Organised crime and ransomware will still be the most consistent threats to most businesses; state-sponsored attacks and cyber-espionage will remain an exotic but potentially high-profile threat to a minority. For all this, the biggest risks will still be the basic ones: staff falling for phishing emails, or using their pets' names as passwords, and poorly configured cloud apps. There will always be new threats, so prepare for the strangest while not forgetting the basics.


Three Surprising Ways Archiving Data Can Save Serious Money


Until recently, backup solutions for enterprises typically fell into two strategies: tape or disk-to-disk (D2D) replication. Both of these solutions come with significant price tags to back up a single terabyte of primary data. The common misconception is that tape backup is cheap. While an actual tape might be cheap, backing up primary data with tape also requires tape libraries, servers, software, data center space, power, cooling, and management overhead. These costs add up very quickly. Our research shows that backing up a single terabyte of primary data with tape could cost $138-$1,731 per year, depending on how frequently you are completing a full backup. The other common backup solution – replication – requires backup workflows that replicate data from the primary NAS system to a secondary storage platform from the same vendor. In most cases, this means that the secondary storage system is architecturally similar to the primary NAS device, requiring hardware, software, data center space, power, cooling, and management.



Quote for the day:


"There are many elements to a campaign. Leadership is number one. Everything else is number two.
 -- Bertolt Brecht


Daily Tech Digest - January 05, 2020

Overcoming Racial Bias In AI Systems And Startlingly Even In AI Self-Driving Cars

The algorithm that’s doing pattern matching might computationally begin to calculate that if someone is tall then they are a basketball player. Of course, being tall doesn’t always mean that a person is a basketball player and thus already the pattern matching is creating potential issues as to what it will do when presented with new pictures and asked to classify what the person does for a living. Realize too that there are two sides to that coin. A new picture of a tall person gets a suggested classification of being a basketball player. In addition, a new picture of a person that is not tall will be unlikely to get a suggested classification of being a basketball player (therefore, the classification approach will be inclusive and furthermore tend toward being exclusionary). In lieu of using height, the pattern matching might calculate that if someone is wearing a sports jersey, they are a basketball player.
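
The spurious "tall means basketball player" correlation described above is easy to reproduce. The sketch below uses a tiny, invented dataset and a scikit-learn decision tree purely for illustration; it is not the system the article discusses, but it shows both sides of the coin: over-inclusion of tall non-players and exclusion of shorter players.

```python
# Hedged sketch with synthetic data: a classifier trained on skewed examples
# learns "tall => basketball player" and then applies that rule to new people.
from sklearn.tree import DecisionTreeClassifier

# Feature: height in cm. Label: 1 = basketball player, 0 = not a player.
heights = [[201], [198], [205], [210], [172], [168], [175], [180]]
labels = [1, 1, 1, 1, 0, 0, 0, 0]   # every tall training example happens to be a player

clf = DecisionTreeClassifier().fit(heights, labels)

# A 202 cm accountant is classified as a basketball player purely on height,
# while a 178 cm professional player is excluded.
print(clf.predict([[202], [178]]))
```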



How SwissLife France’s EAs Used Lean to Raise Their Level of Influence

From what we knew about Lean, we felt it was something that could help us get a grip again on this flow to better deliver on our mission. But we also knew that Lean applied well to activities that already ran as a process with a high flow of "pieces" and short cycle times. And we were completely aware that Enterprise Architecture is different in essence, since it is essentially an upstream activity where you produce abstract artifacts like plans and designs, and no concrete items. Furthermore, there is also some fuzziness in what architects deliver, because measuring the "quality" of an architecture is a challenge and can involve very long cycle times. At first sight, none of this was very compatible with Lean. But we had other IT teams at SwissLife which had already conducted successful Lean projects in the past, and we had good contacts with the Lean coaches who had led them. So we decided to give Lean a go!


Oddly enough, the AI that can drive the explosive growth of a digital firm often isn’t even all that sophisticated. To bring about dramatic change, AI doesn’t need to be the stuff of science fiction—indistinguishable from human behavior or simulating human reasoning, a capability sometimes referred to as “strong AI.” You need only a computer system to be able to perform tasks traditionally handled by people—what is often referred to as “weak AI.” With weak AI, the AI factory can already take on a range of critical decisions. In some cases it might manage information businesses (such as Google and Facebook). In other cases it will guide how the company builds, delivers, or operates actual physical products (like Amazon’s warehouse robots or Waymo, Google’s self-driving car service). But in all cases digital decision factories handle some of the most critical processes and operating decisions. Software makes up the core of the firm, while humans are moved to the edge. Four components are essential to every factory. The first is the data pipeline, the semiautomated process that gathers, cleans, integrates, and safeguards data in a systematic, sustainable, and scalable way. The second is algorithms, which generate predictions about future states or actions of the business.



Meritocracy, Ethics and Enterprise Architecture

The big problem for all of us is that, if such an organization becomes strong enough to take control of the recruiting market, we may have to join, pay and play by its rules to be able to practice at all. This is also the case with some standards today which, while they provide no returns, have monopolized the training and certifications market, reducing it to worthless diploma mills. Having jumped at an apparently good cause, delivering standards to the profession, an organization may cause a lot of grief later on to all of us. Think of the cost to you of refusing to adopt the standard and be trained and certified in it. It's bad that we cannot do anything about this. The good old detective question "cui bono" illustrates, if not solves, the dilemma of such standards, showing that the organizations promoting the standards win much more than the EA community and the ultimate customers, the companies of this world. Now, does EA warrant, as such, a code of ethics and an associated organization to police entry to and the execution of the profession?


2020 will be a challenging year for challenger banks


There are a few basic features that separate challenger banks from legacy retail banks. Signing up is extremely simple and only requires a mobile app. The mobile app itself is usually much more polished than traditional banking apps. Users receive a Mastercard or Visa debit card that communicates with the company’s server for each transaction. This way, users can receive instant notifications, block and unblock their cards and turn off some features, such as foreign payments, ATM withdrawals and online transactions. Challenger banks usually promise customers no markup fees on transactions in foreign currencies, but there are sometimes limits on this feature. So how do these companies make money? When you pay with your card, banks generate a tiny interchange fee on each transaction. It’s really small, but it could become serious revenue at scale with tens of millions or hundreds of millions of users. Challenger banks also offer other financial services like insurance products, foreign exchange or consumer credit.


Fresh Cambridge Analytica leak ‘shows global manipulation is out of control’

An explosive leak of tens of thousands of documents from the defunct data firm Cambridge Analytica is set to expose the inner workings of the company that collapsed after the Observer revealed it had misappropriated 87 million Facebook profiles. More than 100,000 documents relating to work in 68 countries that will lay bare the global infrastructure of an operation used to manipulate voters on “an industrial scale” are set to be released over the next months. It comes as Christopher Steele, the ex-head of MI6’s Russia desk and the intelligence expert behind the so-called “Steele dossier” into Trump’s relationship with Russia, said that while the company had closed down, the failure to properly punish bad actors meant that the prospects for manipulation of the US election this year were even worse. The release of documents began on New Year’s Day on an anonymous Twitter account, @HindsightFiles, with links to material on elections in Malaysia, Kenya and Brazil.


Neuro-symbolic A.I. is the future of artificial intelligence. Here’s how it works

Neuro-symbolic A.I. is not, strictly speaking, a totally new way of doing A.I. It’s a combination of two existing approaches to building thinking machines; ones which were once pitted against each other as mortal enemies. The “symbolic” part of the name refers to the first mainstream approach to creating artificial intelligence. From the 1950s through the 1980s, symbolic A.I. ruled supreme. To a symbolic A.I. researcher, intelligence is based on humans’ ability to understand the world around them by forming internal symbolic representations. They then create rules for dealing with these concepts, and these rules can be formalized in a way that captures everyday knowledge. If the brain is analogous to a computer, this means that every situation we encounter relies on us running an internal computer program which explains, step by step, how to carry out an operation, based entirely on logic. Provided that this is the case, symbolic A.I. researchers believe that those same rules about the organization of the world could be discovered and then codified, in the form of an algorithm, for a computer to carry out.
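
As a toy illustration of the "symbolic" half described above, here is everyday knowledge written as explicit if/then rules that a program steps through. The rules and features are invented for the example and say nothing about any particular neuro-symbolic system.

```python
# Hedged sketch: hand-written symbolic rules, applied step by step, based
# entirely on logic -- the codified-knowledge idea from the excerpt.
RULES = [
    (lambda f: "wings" in f and "feathers" in f, "bird"),
    (lambda f: "wings" in f and "metal" in f,    "aircraft"),
    (lambda f: "fins" in f and "gills" in f,     "fish"),
]

def classify(features):
    """Return the label of the first rule whose condition matches the features."""
    for condition, label in RULES:
        if condition(features):
            return label
    return "unknown"

print(classify({"wings", "feathers", "beak"}))  # -> bird
print(classify({"wings", "metal"}))             # -> aircraft
```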


Common Coding Mistakes You Should Avoid


According to the single responsibility pattern, a function should only be responsible for doing one thing. And one thing only. I’ve seen way too many functions that fetch, process, and present data all in one function. It’s considered better programming to split this up. One function that fetches the data, one that processes it, and another one that presents the data. The reason it is important to keep a function focused on a single concern is that it makes it more robust. Let’s say that in the foregoing example the data got fetched from an API. If there is a change to the API—for example, there is a new version—there is a greater danger that the processing code will break if it is part of the same function. This will most likely cause the presentation of the data to break as well. ... We’ve all seen entire blocks of code containing multiple functions being commented out. No one knows why it’s still there. And no one knows if that block of commented-out code is still relevant. Yet, no one deletes that block of code, which is what you should really do with it.
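
A minimal sketch of the split described above is shown below: one function fetches, one processes, one presents. The endpoint is a public placeholder API used only for illustration; any JSON source with name and email fields would do.

```python
# Hedged sketch: single responsibility per function, so an API change only
# touches the fetch/process steps and never the presentation step.
import json
from urllib.request import urlopen

def fetch_users(url):
    """Fetch raw data only -- no processing, no presentation."""
    with urlopen(url) as resp:
        return json.load(resp)

def summarize_users(users):
    """Process only: reduce the raw records to the fields we need."""
    return [{"name": u["name"], "email": u["email"]} for u in users]

def print_users(summaries):
    """Present only: all formatting lives in one place."""
    for s in summaries:
        print(f'{s["name"]:<25} {s["email"]}')

if __name__ == "__main__":
    users = fetch_users("https://jsonplaceholder.typicode.com/users")
    print_users(summarize_users(users))
```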


9 policies and procedures you need to know about if you’re starting a new security program

A change management policy refers to a formal process for making changes to IT, software development and security services/operations. The goal of a change management program is to increase the awareness and understanding of proposed changes across an organization, and to ensure that all changes are conducted methodically to minimize any adverse impact on services and customers. A good example of an IT change management policy available for fair use is at SANS. An organization’s information security policies are typically high-level policies that can cover a large number of security controls. The primary information security policy is issued by the company to ensure that all employees who use information technology assets within the breadth of the organization, or its networks, comply with its stated rules and guidelines. I have seen organizations ask employees to sign this document to acknowledge that they have read it (which is generally done with the signing of the AUP policy).


Why 2019 Was Actually A Secret Success For Blockchain In Financial Services

As interest in digital assets grows, the infrastructure for securely holding and keeping bitcoin and other cryptocurrencies in a regulated and compliant manner is considered one of the major challenges for any institutional newcomer, regardless of size and trading volume. In 2019, we saw the entrance of significant players, as both Bakkt (backed by ICE and the NYSE) and Fidelity Digital Assets are able to provide safe-keeping and custodian services on top of other services. Additionally, custody was a hot topic throughout 2019, as we saw startups like Trustology trying to get in and hope for market share while the traditional asset managers and custodians like Vanguard, State Street, and Northern Trust were slowly building products, solutions, and partnerships. In retail banking, and especially in the back-office services of settlement, reconciliation, transaction audit, and visibility, the most interesting developments in 2019 centered on stablecoins and CBDCs. We saw the launch of projects like J.P. Morgan’s own stablecoin and Utility Settlement Coin/Fnality, which is backed by banks like UBS, Barclays, and BNY Mellon, among others.



Quote for the day:


"Be with a leader when he is right, stay with him when he is still right, but, leave him when he is wrong." -- Abraham Lincoln


Daily Tech Digest - January 04, 2020

The role of CDOs in building trust


Data is fundamental to organisations. But at a time when companies have access to more data about their customers than ever before, an important characteristic of ethical, trustworthy organisations is how responsibly they manage that data. Perceptive business leaders understand they don’t own that data — the customer does. With that in mind, they know they must work to win trust by demonstrably acting as good custodians of customer data, keeping it safe and using it only for permitted purposes. With consumer sentiment aligned with the demands of new and emerging data privacy laws, data governance and privacy are foundational to building and preserving customer trust and enhancing customer experience and engagement. Regardless of industry, the work invested in response to the GDPR helps to build trust with customers. That could, in turn, lead to better all-round customer experiences. Much of the work to meet the requirements for GDPR compliance required businesses to have a joined-up view of an individual’s personal data across multiple internal systems and cloud databases, with many initially focusing on customers.



Artificial Neural Networks For Blockchain: A Primer

The introduction of artificial intelligence, RNNs and especially LSTMs has enabled complex time-series forecasting, which is the sector of machine learning focused on predicting parameters in the future by referencing parameters from the past. Using data on bitcoin’s (or any cryptocurrency, for that matter) previous price points, RNNs can be trained in order to estimate its future price. This enables players in the retail industry to account for future price increases/decreases, possibly facilitating the transition to the implementation of digital currencies. It's important for technology professionals to learn as much as they can about the future of AI and neural networks in order to stay ahead of the curve. There are many great resources that can help you with this, including blogs such as Learn Neural Networks and videos from GoogleTechTalks and Geoffrey E. Hinton. Take a look around the web, and get invested in the future -- it will behoove you in more ways than you know.
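
To make the forecasting idea concrete, here is a minimal time-series sketch: train an LSTM on a window of past price points to predict the next one. The price series below is synthetic, and the window size, layer sizes, and epoch count are arbitrary; this is a sketch of the technique, not a trading model.

```python
# Hedged sketch: sliding-window LSTM regression on a synthetic "price" series.
import numpy as np
from tensorflow import keras

rng = np.random.default_rng(0)
prices = np.cumsum(rng.normal(0, 1, 500)) + 100.0   # stand-in for historical prices

window = 30
X = np.array([prices[i:i + window] for i in range(len(prices) - window)])
y = prices[window:]
X = X[..., np.newaxis]                               # shape: (samples, timesteps, features)

model = keras.Sequential([
    keras.layers.LSTM(32, input_shape=(window, 1)),
    keras.layers.Dense(1),                           # predicted next price point
])
model.compile(optimizer="adam", loss="mse")
model.fit(X, y, epochs=5, verbose=0)

next_price = model.predict(prices[-window:].reshape(1, window, 1), verbose=0)
print("forecast for the next step:", float(next_price[0, 0]))
```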


What Are Plug & Play Language Models


The PPLM models have three main phases. First, a forward pass is performed through the language model to compute the likelihood of the desired attribute, using an attribute model that predicts probability. Second, a backward pass updates the internal latent representations using gradients from the attribute model. Third, a new distribution over the vocabulary is generated from the updated latents. This process of updating the latents is repeated at each time-step until it leads to a gradual transition towards the desired attribute. To validate the approach of PPLM models, the researchers at Caltech and Uber AI used both automated measures and human annotators. For instance, perplexity is an automated measure of fluency, though its effectiveness has been questioned in open-domain text generation. Perplexity was measured using a pre-trained GPT model. In the case of human annotation, annotators were asked to evaluate the fluency of each individual sample on a scale of 1-5, with 1 being “not fluent at all” and 5 being “very fluent”.
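
For the automated fluency measure mentioned above, perplexity is simply the exponential of the average negative log-likelihood the language model assigns to the generated tokens. The probabilities in this sketch are made-up stand-ins, not output from GPT or from the PPLM evaluation.

```python
# Hedged sketch: perplexity from per-token model probabilities.
import math

def perplexity(token_probs):
    """token_probs: model probability of each generated token, in order."""
    nll = -sum(math.log(p) for p in token_probs) / len(token_probs)
    return math.exp(nll)

fluent_sample = [0.21, 0.35, 0.18, 0.40, 0.27]
clunky_sample = [0.02, 0.05, 0.01, 0.03, 0.04]
print(perplexity(fluent_sample))   # lower -> the model finds the text more fluent
print(perplexity(clunky_sample))   # higher -> less fluent under the model
```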


How to Effectively Employ an AI Strategy in your Business


Community plays a vital role in driving change in any company. There are ways to connect with the community both online (webinars) and offline (meetups). Organizing meetups, webinars and training sessions enables one to exchange knowledge and learn from others. Learning from others, participating in sessions and sharing relevant knowledge is a great way to connect to the community. It doesn’t matter where you are. There are machine learning communities all around the world, and there may be a local chapter right next to your place. Another important reason for connecting to the community is that most data scientists and researchers today want to collaborate with others. The technologies in the AI space are advancing at a rapid pace, and by connecting, people can ask all the right questions, share with others, participate with them and learn from everyone. Needless to say, in the last ten years, most of the cutting-edge research has come from the academic community and the open source community.


Why the quantum internet should be built in space


The problem is that entanglement is fragile and hard to preserve. Any small interaction between one of the photons and its environment breaks the link. Indeed, this is exactly what happens when physicists transmit entangled photons directly through the atmosphere or through optical fibers. The photons interact with other atoms in the atmosphere or the glass, and the entanglement is destroyed. It turns out the maximum distance over which entanglement can be shared in this way is just a few hundred kilometers. How then to build a quantum internet that shares entanglement across the globe? One option is to use “quantum repeaters”—devices that measure the quantum properties of photons as they arrive and then transfer these properties to new photons that are sent on their way. This preserves entanglement, allowing it to hop from one repeater to the next. However, this technology is highly experimental and several years from commercial exploitation. So another option is to create the entangled pairs of photons in space and broadcast them to two different base stations on the ground.


Top minds in machine learning predict where AI is going in 2020

In AI, the phrase “black box” has been around for years now. It’s used to critique neural networks’ lack of explainability, but Kidd believes 2020 may spell the end of the perception that neural networks are uninterpretable. “The black box argument is bogus … brains are also black boxes, and we’ve made a lot of progress in understanding how brains work,” she said. In demystifying this perception of neural networks, Kidd looks to the work of people like Aude Oliva, executive director of the MIT-IBM Watson AI Lab. “We were talking about this, and I said something about the system being a black box, and she chastised me reasonably [saying] that of course they’re not a black box. Of course you can dissect them and take them apart and see how they work and run experiments on them, the same [as] we do for understanding cognition,” Kidd said. Last month, Kidd delivered the opening keynote address at the Neural Information Processing Systems (NeurIPS) conference, the largest annual AI research conference in the world. Her talk focused on how human brains hold onto stubborn beliefs, attention systems, and Bayesian statistics.


Artificial Intelligence will be useful where Intelligence is!


To better understand what it is exactly that we are talking about, we will use the definitions, and also the distinctions, that Max Tegmark provides in the popular Life 3.0. He describes third-level life as life that has the ability to design its own hardware and software (the technological stage). That contrasts with us humans, Life 2.0, who modify our hardware through evolution but design most of our software (the cultural stage). Life 1.0 is life that modifies its hardware and software only through evolution (the biological stage), meaning primitive organisms. The stage we examine is the third stage, defined as Artificial General Intelligence: the ability of a system to successfully carry out any cognitive labor at least as well as a human would. According to Tegmark, technosceptics believe we are still far from approaching that capability, unlike technology polemicists, digital utopians and the beneficial-AI movement who, despite their differences, think we are close to achieving that goal. As we mentioned in the introduction, Microsoft’s research neither projects nor predicts.


Deepwave: A Recurrent Neural Network For Real Time Acoustic Imaging


Though CNNs enjoy the status of being one of the most widely used architectures across many machine learning applications, they falter in the presence of more complex image reconstruction problems where the input data may not consist of an image, as is the case in biomedical imagery, interferometry, or acoustic imaging. Moreover, the authors have also observed that standard convolutional architectures cannot handle images with non-Euclidean domains, such as the spherical maps produced by omnidirectional acoustic cameras. And this is where recurrent networks have proven to be useful. A cascade of recurrent layers with trainable parameters, a variant of RNN proposed by Yann LeCun and his peers, was good at learning shortcuts in the reconstruction space, allowing it to achieve a prescribed reconstruction accuracy faster than gradient-based iterative methods. With techniques like pruning, the recurrent networks got even smaller, with fewer parameters.


Three Considerations For Realizing Real Digital Transformation

Experience as the true north is one of the most powerful drivers of digital innovation. The science behind experience as a compass, capability and organizational muscle is well laid out. We go from problem definition to journey mapping to future state definition to tech landscaping to component architecture to building the new experience, allowing us to go from piecemeal automation to full transformation. ... To innovate at scale, you need to build the right talent, namely "bilinguals." A bilingual is neither the most evolved machine learning engineer nor the highest performing supply chain planner. This is someone who understands enough of the two to realize the value at the intersection, such as financial traders who understand machine learning or assembly line operators who understand analytics and data science. These intersections involve cross-skilling employees across disciplines and promoting a culture of curiosity and change.


AI’s future is entirely human-centric
Concerns over AI range from how it could make jobs in almost every sector obsolete to existential worries about the threat a super-intelligent, self-learning machine could pose to humanity. For every stakeholder in artificial intelligence — from the programmer to the end user — it’s important to remain focused on using this powerful technology to support humans, not replace them. AI can be designed to empower people to share their skills and knowledge. If you think about an organisation or a community, the scope of human intelligence is vast. You may have hundreds of thousands of human brains, all with different perspectives, experiences and understanding. Unlocking this insight with AI could enhance people’s intelligence and fuel their careers. Teams can access answers and support when they need it (and pick up new skills of their own in the process). For business leaders, using AI has clear positive knock-on effects, from boosted productivity and efficiency to greater workplace happiness and employee retention.



Quote for the day:


"People who enjoy meetings should not be in charge of anything." -- Thomas Sowell


Daily Tech Digest - January 03, 2020

2020: The year the office finds its voice?

While conversational AI tools such as chatbots are now common, voice interfaces have been slower to arrive, according to Hayley Sutherland, a senior research analyst at IDC. But advances in the underlying natural language processing technology have made voice-based assistants accurate enough to support regular interactions. “We've seen huge leaps in natural language processing, even in the last year,” she said. That’s important because it means the assistants are less likely to misunderstand commands, which can quickly annoy users. “If I'm working with a voice assistant and it works 80% of the time, that remaining 20% is a lot in my day-to-day job; that can add up to a lot,” she said. Although advances in natural language processing (NLP) usually come from big tech companies like Microsoft, Amazon and Google with deep pockets for research and development, the availability of voice APIs gives more companies access to the technology. And those firms can create AI assistants better tailored to specific workplace scenarios.



Using the Visitor Pattern to Maintain MVVM Layering While Implementing Dialogs in WPF

The Visitor Pattern separates data from the operations to be performed on it. In this case, a dialog form needs to update or populate the fields of an object. The Visitor achieves this by having a class with overloaded methods, each accepting a specific object type (class) argument. Thus, when a "Visit" method is invoked with a specific argument type, the correct method is automatically chosen. These Visit methods are responsible for creating the correct dialog window, assigning the argument object as the DataContext of the dialog, and showing the dialog (we're assuming a modal dialog here). The Visitor is injected into the ViewModel (VM) objects in the MainWindow-loaded event handler by property injection (the ViewModel object(s) have a public Visitor property field to hold the reference to the Visitor). I believe this is simpler than using a mediator, since there is no need for events to pass between the layers. This example does not require a Dependency Injection container, although one could be applied with little difficulty. It does not reference Prism Behaviors or other external frameworks. It can be added to an existing code base with no disruption.
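
The article's implementation is C#/WPF, but the dispatch idea translates to other languages: a visitor class with one "Visit" method per concrete type, chosen automatically from the argument's type. The following is a hedged, language-agnostic sketch in Python using functools.singledispatchmethod; the view-model classes and print statements are stand-ins for the dialog creation and DataContext wiring described above.

```python
# Hedged sketch of the Visitor idea: type-based dispatch to a per-type method.
from dataclasses import dataclass
from functools import singledispatchmethod

@dataclass
class CustomerViewModel:
    name: str = ""

@dataclass
class OrderViewModel:
    order_id: int = 0

class DialogVisitor:
    @singledispatchmethod
    def visit(self, view_model):
        raise NotImplementedError(f"No dialog registered for {type(view_model)!r}")

    @visit.register
    def _(self, view_model: CustomerViewModel):
        # In the WPF version this would create the customer dialog, assign the
        # view model as its DataContext, and call ShowDialog().
        print("showing customer dialog for", view_model)

    @visit.register
    def _(self, view_model: OrderViewModel):
        print("showing order dialog for", view_model)

visitor = DialogVisitor()            # injected into the view models in the article
visitor.visit(CustomerViewModel("Acme"))
visitor.visit(OrderViewModel(42))
```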


Ready, Set ... Stop: While Tech Speeds Audits, Regulators Slow Down Process

Unfortunately, the Financial Accounting Standards Board and other regulatory bodies have not yet addressed the implications of these technologies. Blockchain is widely associated with Bitcoin and cryptocurrencies. But blockchain will transform auditing, because blockchain by its nature is an ecosystem of incredibly secure transactions. Even now, startups and large financial services companies are developing solutions to make over old-school industries, like gas and oil, changing when and how their accounting gets done. If we step back and see what is going on with blockchain, AI, and machine learning, it is quite probable that accounting will be dramatically altered in our lifetimes. As it stands, these innovations remain ahead of the standard-setting process. When and how these bodies can address these advancements remains to be seen. But we are seeing technology disrupt allied fields, like logistics and supply chain management. Our standards bodies have no choice but to get ahead of the story—before the story writes its own plot.


Cloudian CEO: AI, IoT drive demand for edge storage


One is that they continue to just need lower-cost, easier to manage and highly scalable solutions. That's why people are shifting to cloud and looking at either public or hybrid/private. Related to that point is I think we're seeing a Cloud 2.0, where a lot of companies now realize the public cloud is not the be-all, end-all and it's not going to solve all their problems. They look at a combination of cloud-native technologies and use the different tools available wisely. I think there's the broad brush of people needing scalable solutions and lower costs -- and that will probably always be there -- but the undertone is people getting smarter about private and hybrid. Point number two is around data protection. We're now seeing more and more customers worried about ransomware. They're keeping backups for longer and longer and there is a strong need for write-once compliant storage.


Do containers need backup?

In one sense, a typical container does not need to have its running state backed up; it is not unique enough to warrant such an operation. Furthermore, most containers are stateless – there is no data stored in the container. It’s just another running instance of a given container image that is already saved via some other operation. Many container advocates are quick to point out that high availability is built into every part of the container infrastructure. Kubernetes is always run in a cluster. Containers are always spawned and killed off as needed. Unfortunately, many confuse this high availability with the ability to recover from a disaster. To change the conversation, ask someone how they would replicate their entire Kubernetes and Docker environment should something take out their entire cluster, container nodes and associated persistent storage. Yes, there are reasons Kubernetes, Docker and associated applications need to be backed up. First, to recover from disasters. What do you do if the worst happens? Second, to replicate the environment, as when moving from a test/dev environment to production, or from production to staging before an upgrade.


Your excess server resources are wanted in the cloud

Although not a new concept, we are now looking at the opportunity for those who have private servers with excess capacity to rent that capacity to a cloud service provider, which can dole out those compute and storage systems on demand to anyone who needs them. If you’re thinking ride-sharing for servers, you’re not far off. In this scenario the cloud service provider is really just a broker sitting between those needing cloud services and those who have servers that can be shared. You may be leveraging servers that have excess capacity in Las Vegas on Monday and perhaps servers in London on Tuesday. You don’t care, since you’re abstracted away from the physical servers, not even knowing their location and true ownership. Peer-to-peer networks are nothing new. Indeed, in this use case there is a clear benefit for both parties. Those with excess server capacity will make money by renting it, so there is a revenue stream for server capacity that would normally go unused. Those consuming this service would likely pay less money than they would for most public cloud services, or so it would seem, with the service still living up to SLAs preset by the consumers.



10 top distributed apps (dApps) for blockchain

"DApps will pool resources across numerous machines globally," said Juniper senior analyst Lauren Foye. "The results are applications which do not belong to a sole entity, [but] rather are community-driven." Bitcoin was arguably the first dApp, enabling anyone in the world to download a bit of open-source code to join a blockchain network and verify transactions using a “mining” algorithm, thereby generating digital currency (cryptocurrency) as a reward. Like a RAIDed storage array, if one of the computers (or nodes) running the dApp software goes down, another node instantaneously resumes the task. Because smart contracts, or self-executing business automation software, can interact with dApps, they're able to remove administrative overhead, making them one of most attractive features associated with blockchain. While blockchain acts as an immutable electronic ledger, confirming that transactions have taken place, smart contracts execute predetermined conditions; think about a smart contract as a computer executing on "if/then," or conditional, programming.


Father Prototype and Mother Constructor

Most people get acquainted with JavaScript like "well, it's an object-oriented, dynamically typed programming language", at least I was. Learning more and more of it, it became obvious that the strength of the language lies mostly in its powerful functions. In its first-class functions. Prototypes are one of the most fundamental JavaScript features. If I start this article with objects, there will be trouble; if I start with functions, there will be double. Either way, I'll soon get into the "chicken-and-egg" situation. I'll write some examples and point to some facts. Let's discover the rules of the prototype and build upon the constructors. Something tells me it's best to start like everybody starts with this language. You create an object... But what's an object? More on that in Appendix C. For JavaScript, "objects represent collections of named values. The named values, in JavaScript objects, are called properties. Object properties can be both primitive values, other objects, and functions. An object method is an object property containing a function definition."


Open source storage: driving intelligence in the small data sprawl era

Open source is increasingly influential in the analytics space. The analytics space has evolved beyond things like Hadoop and MapReduce, which were very text-oriented and big-data-lake-centric, to an understanding that the world is shifting to what is termed small data sprawl. The proliferation of IoT and of remote sites and offices means that organisations want to process or analyse data remotely, while enriching that data with information from the centre. With this change there have been many more vertical offerings that integrate the analytics with the storage itself. Manley explained: “Somebody doesn’t just want to store data for IoT. The point of IoT is that I’m processing and analysing, and we’re seeing a lot more integrated pipelines, of which storage becomes a component. And open source is by far the most popular way, whether you look at Spark or Elasticsearch, because they can evolve quickly and people can adjust them to meet the specific needs of their particular industry.”


CSS Architecture for Component-Based Applications

CSS architecture is a complex subject that is often overlooked by developers, as it's possible to encapsulate CSS per component and avoid many of the common pitfalls that relate to CSS. While this 'workaround' can make the lives of developers simpler, it does so at the cost of reusability and extendibility. When a developer defines a CSS class, it automatically affects the global scope modifying all related elements (and their children). This works great for simple applications where developers can predict the results, but can quickly become a problem when the size of the application and the team grows, and unintended results start to happen. Initially, this problem was solved by Block Element Modifier (BEM), which is a methodology and set of naming conventions that helped avoid clashes and gave developers strong indications as to what each class did e.g. form__submit--disabled tells us we are within a form, handling a submit button, and applying the disabled state.



Quote for the day:


"No organization should be allowed near disaster unless they are willing to cooperate with some level of established leadership." -- Irwin Redlener


Daily Tech Digest - January 02, 2020

Seoul to install AI cameras for crime detection

The cameras will automatically measure whether somebody is walking normally or tailing someone. They will also detect what passersby are wearing -- such as hats, masks, or glasses -- and what they are carrying, such as bags or dangerous objects with a strong likelihood of being used to commit a crime. The cameras will also take into account whether it is day or night. They will use this information to deduce the probability that a crime will take place, the developers claim. If that probability exceeds a certain threshold, the cameras will alert the district office and nearby police stations to send personnel to the location. Going forward, Seocho and ETRI plan to analyse 20,000 court sentencing documents and crime footage to deduce crime patterns for the AI software to memorise. The cameras will be able to compare whether what is being filmed at present matches past crime patterns. "It will work like deja vu," said an ETRI spokesperson. The AI software is still in development and the complete version will be finished by 2022, the institute said. Cameras with these capabilities will eventually be rolled out to other districts in Seoul as well as other provinces, it added.



DevOps Ten Years Later: We Still Have Work to Do

The rapid pace of DevOps and agile release cycles often introduces more security bugs than the slower, siloed approaches they replace. Adding application security teams into the DevOps process may increase the learning curve, as most developers have little to no experience in application security (vulnerabilities, remediations, etc.), but the end result will be fewer security issues. Any application that is "pre-DevOps" or is a third-party app gains zero benefit from DevOps. In most large enterprises, such "brown-field" apps comprise approximately 80% of all apps, which means there is a pressing question of how to manage pre-DevOps and third-party apps. Runtime-based solutions, including RASP, bring rapid-update and remediation benefits to these classes of apps in a DevOps-like way. The compiler-based technology that Waratek has perfected allows patching, adding security rules, and even upgrading out-of-public-support Java platforms in minutes, not months (or years). This eliminates the need for source code changes, production downtime, profiling, tuning, and the use of heuristics, along with a lot of needless cost and performance overhead.


A CISO's Security Predictions for 2020

The combination of AI and GAN technologies and the flaws inherent in all of the current technologies that leverage facial recognition to unlock smart phones, verify passport IDs and identify criminals on the street presents a rapidly growing threat, which cybercriminals will look to exploit. Extortionary deepfakes will be used to portray highly realistic videos of executives in compromising positions alongside ransomware demands tied to the threat of public domain release. Propagandized deepfakes will abound throughout the 2020 election cycle and be leveraged to discredit candidates and propel misrepresentations of truth (lies) to micro-targeted segments of voters via social media. Audio and video deepfakes will enhance the credibility of business email compromise attacks and lend an even more convincing air of authenticity to money transfer requests. And it won't take a hacking genius to pull these off. In fact, anyone can leverage AI to build convincing deepfakes without expertise in technology. Machine-learning websites available today can accept uploaded audio and videos and return deepfakes based on specific scripts.


How classroom technology is holding students back

Some studies have found positive effects, at least from moderate amounts of computer use, especially in math. But much of the data shows a negative impact at a range of grade levels. A study of millions of high school students in the 36 member countries of the Organisation for Economic Co-operation and Development (OECD) found that those who used computers heavily at school “do a lot worse in most learning outcomes, even after accounting for social background and student demographics.” According to other studies, college students in the US who used laptops or digital devices in their classes did worse on exams. Eighth graders who took Algebra I online did much worse than those who took the course in person. And fourth graders who used tablets in all or almost all their classes had, on average, reading scores 14 points lower than those who never used them—a differential equivalent to an entire grade level. In some states, the gap was significantly larger.


Bosch debuts long-range lidar sensor for autonomous vehicles

Like all lidar sensors, Bosch’s solution measures the distance to target objects by illuminating them with laser light and measuring the reflected pulses. It’s intended for close and medium ranges on highways and in cities, and the company claims it’ll be price-competitive with rivals, thanks to economies of scale. “By filling the sensor gap, Bosch is making automated driving a viable possibility in the first place,” said Bosch management board member Harald Kroeger in a statement. “We want to make automated driving safe, convenient, and fascinating. In this way, we will be making a decisive contribution to the mobility of the future.” The lidar sensor will slot alongside the six-antenna LRR4 radar in Bosch’s ever-expanding perception portfolio. The LRR4 features a detection range of 250 meters and can recognize up to 24 objects simultaneously. But, like all radar sensors, it offers lower angular accuracy than lidar, loses sight of objects on curves, and can become confused when multiple vehicles are close to each other.
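
The distance measurement described above reduces to a time-of-flight calculation; here is a hedged sketch that assumes a single, clean return pulse (real sensors must also deal with noise, multiple returns, and angular resolution).

```javascript
// Time-of-flight sketch: distance = (speed of light * round-trip time) / 2.
// Idealized single-return assumption; the function name is illustrative.
const SPEED_OF_LIGHT_M_PER_S = 299_792_458;

function distanceFromRoundTrip(roundTripSeconds) {
  return (SPEED_OF_LIGHT_M_PER_S * roundTripSeconds) / 2;
}

// A pulse that returns after ~1.67 microseconds corresponds to roughly
// 250 m, the detection range quoted above for the LRR4 radar.
console.log(distanceFromRoundTrip(1.67e-6).toFixed(1)); // ≈ 250.3 metres
```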


Big Data, Health Informatics, and the Future of Medicine


The ongoing convergence of emerging technologies in cloud computing, mobility, the Internet of Things (IoT), machine learning, and big data analytics is revolutionizing the medical and healthcare industry in ways yet to be fully understood by most practitioners, academics, and researchers. The increasing availability of biomedical information, and its correspondingly high growth rate, is driving us quickly towards a future where personalized medicine is not only possible but will significantly help to raise global life expectancy. Today, a big data analytics lifecycle that runs from data produced by machine learning and AI-enabled genomic sequencing technologies, through intelligent data translation and correlation, to the aggregation of reports for clinicians, pharmacologists, and other related researchers is becoming easily accessible. The ability to collect, analyze, translate, correlate and compare large amounts of data using innovative algorithms, with the chance to merge all this information within a seamless cloud analytics environment for further study, is one of the fundamental driving forces behind this change.


Gartner: Top 10 strategic technology trends in 2020


The democratisation of technology means providing people with easy access to technical or business expertise without extensive or expensive training. Already referred to as “citizen access”, this trend will focus on four key areas: application development, data and analytics, design, and knowledge. Democratisation is expected to see the rise of citizen data scientists, programmers and other forms of DIY technology engagement. For example, it could enable more people to generate data models without having the skills of a data scientist. This would, in part, be made possible through AI-driven code generation. The controversial trend of human augmentation focuses on the use of technology to enhance an individual’s cognitive and physical experiences. It comes with a range of cultural and ethical implications.


Tech and Advocacy: How Today’s Youth Are Speaking Out


Technology is giving us back a little bit of that feeling of an “ongoing public forum.” Young people today have the benefit of exposure to cultural and political discussions and controversies early on. Exposure to a well-rounded collection of ideas and perspectives early in life is essential for personal development. Moreover, developing resilience requires community, relationships, and a sense of shared difficulties (or even trauma). Social media and technology can provide these things under the right circumstances. The accessibility of platforms has led to an abundance of perspectives. Young people are quick to offer their views, but they also look for organizations and brands that are equally authentic about their values. Some companies even make careers out of creating advocacy-centered content for traditional and social media channels. Public advocacy is “cool” now — and so is the sharing of opinions once thought off-limits.


Will the US Get a Federal Privacy Law?

In the latest attempt at building consensus, the House Energy & Commerce Committee recently unveiled a preliminary draft of a bipartisan consumer privacy bill. The committee is now seeking comments from privacy experts, trade associations and companies. The draft side-steps several of the most divisive issues, including whether a federal law should override state privacy laws and whether individuals should be empowered to sue companies over privacy violations. These two issues have led to months of stalled negotiations. Democrats, including Sen. Maria Cantwell, D-Wash., have argued against having a federal law supersede stronger state laws. They also favor allowing individuals to sue companies for privacy violations. Meanwhile, Republicans, including Sen. Roger Wicker, R-Miss., have said they will not support a federal privacy law unless it pre-empts state legislation, such as the CCPA, to create uniform rules for all to follow. They also argue against giving consumers the power to file privacy lawsuits, fearing that a flood of frivolous litigation against companies could create an unnecessary burden.



Q&A on the Book EDGE: Value-Driven Digital Transformation

One of the most uncomfortable changes for senior leaders undergoing digital transformation is the change in performance measures. The most profound of these is the switch from internal ROI to external customer value. While this is a measurement change, it is more fundamentally a change in perspective, a change in your gut-level basis of decision making. It means the first and foremost question an executive leader asks is not "How will this impact our bottom line?" but "How will this impact the value we deliver to our customers?" ROI isn’t the objective; instead, it is a constraint. You need to make a profit to continue delivering customer value; however, ROI is a business benefit (internal), not a customer value (external). Complexity theory includes a concept called a fitness function. A fitness function summarizes a specific measure used to evaluate how close a solution is to achieving a stated goal.
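
As a rough illustration of that idea (not the book's own implementation), a fitness function can be sketched as a small routine that scores how close current metrics are to a stated customer-value goal; the metric names and targets below are hypothetical.

```javascript
// Illustrative sketch of a fitness function: it reduces "how close are we
// to the stated goal?" to a single score. Metrics and targets are invented.
function customerValueFitness(measured, target) {
  // Score each dimension as a ratio capped at 1, then average the ratios.
  const dimensions = Object.keys(target);
  const total = dimensions.reduce((sum, key) => {
    const ratio = Math.min(measured[key] / target[key], 1);
    return sum + ratio;
  }, 0);
  return total / dimensions.length; // 1.0 means the goal is fully met
}

const target = { nps: 60, weeklyActiveUsers: 10000, avgTaskTimeSavedMins: 30 };
const current = { nps: 45, weeklyActiveUsers: 8000, avgTaskTimeSavedMins: 24 };

console.log(customerValueFitness(current, target).toFixed(2)); // "0.78"
```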



Quote for the day:


"Absolute identity with one's cause is the first and great condition of successful leadership." -- Woodrow Wilson