Daily Tech Digest - April 24, 2019

Edge computing is in most industries’ future

The growth of edge computing is about to take a huge leap. Right now, companies generate about 10% of their data outside a traditional data center or cloud, but within the next six years that figure will increase to 75%, according to Gartner. That’s largely down to the need to process data emanating from devices such as Internet of Things (IoT) sensors. Early adopters include:

Manufacturers: Devices and sensors are endemic to this industry, so it’s no surprise there is a push to find faster ways to process the data they produce. A recent Automation World survey found that 43% of manufacturers have deployed edge projects. The most popular use cases have included production/manufacturing data analysis and equipment data analytics.

Retailers: Like most industries deeply affected by the need to digitize operations, retailers are being forced to innovate their customer experiences. To that end, these organizations are “investing aggressively in compute power located closer to the buyer,” writes Dave Johnson, executive vice president of the IT division at Schneider Electric.


Why fintech is the sharing economy’s final frontier

Fintech, through sharing economy applications, threatens to break up the banking complex as we know it – all through social capital and economic sharing. Evidently, this is where investors feel the sharing economy has the most potential: fintech is already attracting huge amounts of investment and is the biggest sector in terms of venture capital investment. Peer-to-peer lending, equity crowdfunding, and payments are proving to be the three areas with the biggest potential as fintech through the sharing economy comes into full focus. Firstly, equity crowdfunding creates a two-sided marketplace between investors and startups: a slice of a private company is sold for capital, typically through the sale of securities such as shares, convertible notes, debt, or a revenue share. The process is similar to crowdfunding or Kickstarter campaigns, but with possible payouts for those willing to put their money where their mouth is.


For now, Intel’s 9th-gen mobile chips are targeting the sort of dazzling, high-end gaming notebooks most of us unfortunately can’t afford or can’t bear to lug around. If that’s the case for you, be patient: many of the more mainstream mobile 9th-gen Core chips will likely debut in late summer or fall. Separately, Intel announced a metric ton of new 9th-gen desktop processors, about six months after it announced the Core i9-9900K. Until then, read on to get an idea of what to expect. ... The new 9th-gen Core chips are designed with Intel’s “300-series” mobile chipsets in mind: the Intel CM246, QM370, or HM370. A key performance advantage is the x16 lanes of PCI Express 3.0 running directly off the CPU, which provide enough bandwidth for an upcoming generation of third-party discrete GPUs. Support for a full 128GB of DDR4 memory isn’t anything to sneeze at, either. The new 9th-gen mobile Core chips also ship with integrated graphics, part of what Intel calls “Generation 9.5.”


Boosting data strategies by combining cloud and infrastructure management

By providing insight into IT operations, including hardware, software, and network environments, these tools support daily operations through real-time monitoring, reduce downtime, and maintain business productivity. The growing integration of next-generation technologies such as machine learning and artificial intelligence is positioning infrastructure management as an attractive choice for IT teams. One major benefit infrastructure management gives data center managers is the ability to monitor all aspects of IT operations. Through an intuitive platform that taps into the full value of existing infrastructure, these tools fuel modernization with intelligent software, providing complete visibility and control over the environment. For example, companies using infrastructure management tools see up to a 40 percent increase in operating efficiency. From these insights, operators can take control of power consumption, real-time data center health, and preventative analytics.


The FBI's RAT: Blocking Fraudulent Wire Transfers

As much as it might seem like fighting internet crime is like pushing the tide with a broom, there is a bright spot in the gloom. In February 2018, the IC3 created what it terms the RAT, or Recovery Asset Team. Its goal is to contact financial institutions quickly to freeze suspicious pending wire transfers before they're final. Much internet-enabled crime eventually intersects with banking systems. So while it may be difficult to prevent scams, there is a touch point where, with industrywide cooperation, stolen funds can be recovered. But time is tight, and swiftly contacting financial institutions is key to stopping stolen funds from being withdrawn. IC3 reports that the bureau's RAT group - working with what's termed the Domestic Financial Fraud Kill Chain - handled 1,061 incidents between its launch and the end of last year, covering an 11-month period. Those incidents caused losses of more than $257 million. Of that, the RAT achieved a laudable 75 percent recovery rate, recouping more than $192 million.


GraphQL: Core Features, Architecture, Pros, and Cons


A GraphQL server provides a client with a predefined schema — a model of the data that can be requested from the server. In other words, the schema serves as a middle ground between the client and the server, defining how to access the data. Written in Schema Definition Language (SDL), the basic components of a GraphQL schema — types — describe the kinds of objects that can be queried on that server and the fields they have. The schema defines what queries are allowed, what types of data can be fetched, and the relationships between these types. You can create a GraphQL schema and build an interface around it in any programming language. Because the client has the schema before querying, it can validate a query against the schema to make sure the server will be able to respond. And because the shape of a GraphQL query closely matches the shape of the result, you can predict what will be returned. This eliminates such unwelcome surprises as unavailable data or a wrongly structured response.
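As a minimal sketch of these ideas (using Python's graphql-core package; the schema, field names, and data below are illustrative assumptions, not taken from the article), a schema can be built from SDL and queried so that the result mirrors the query's shape:

```python
# pip install graphql-core   (3.x assumed)
from graphql import build_schema, graphql_sync

# Types describe the kinds of objects that can be queried and their fields.
sdl = """
type Author {
  name: String
}

type Post {
  title: String
  author: Author
}

type Query {
  posts: [Post]
}
"""

schema = build_schema(sdl)

# Hypothetical data standing in for a real resolver layer.
root = {"posts": [{"title": "Hello GraphQL", "author": {"name": "Ada"}}]}

# The shape of the query closely matches the shape of the result.
result = graphql_sync(schema, "{ posts { title author { name } } }", root)
print(result.data)
# {'posts': [{'title': 'Hello GraphQL', 'author': {'name': 'Ada'}}]}
```

A query that asks for a field not in the schema would fail validation up front, which is exactly the "no unwelcome surprises" property described above.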


Why composable infrastructure goes hand in hand with private cloud


"With composable infrastructure, I can have a physical server dynamically provisioned in the size and shape I need and have it all stitched together. I'm actually getting those physical server assets to use," said Mike Matchett, storage analyst and founder of Small World Big Data, an IT consultancy near Boston. The composable model differs from converged and hyper-converged systems. Converged infrastructure is sold as racks of prequalified hardware from multiple vendors and includes systems such as NetApp FlexPod and Dell EMC VxBlock. Hyper-converged systems ship all the necessary components -- servers, hypervisor software, network connectivity and storage -- delivered as an integrated appliance. While swapping in composable modules sounds appealing, Matchett said enterprises should methodically estimate the financial cost before taking the plunge. "Without a significant churn rate for the resources, I'm not sure composable makes much fiscal sense," he said. The cost made sense for Clearsense, based in Jacksonville, Fla.


Dark Side of Offshore Software Development

Time is money, especially if you are working against the clock to get your product to market before the competition, as most startups are. A week’s delay may not kill your business, but few projects can afford to lose a month or more. That said, offshore development is not the only reason behind missed deadlines: in-house teams can also fail to finish projects on time, and the expenses will be even higher. ... One of the problems of outsourcing is that even if the project progresses on schedule, it is hostage to the offshore vendor. Agile development may provide you with interim results and functionality, but you won’t receive the full package until the vendor delivers it. If you are in a hurry to get the product on the market, you may be ready to give the shirt off your back, and unscrupulous companies are willing to risk their reputation and future business to squeeze you dry.


What Is Explainable Artificial Intelligence and Is It Needed?

Explainable AI (XAI) aims to explain the reasoning of new machine/deep learning systems, to determine their strengths and weaknesses, and to anticipate how they will behave in the future. The strategy for achieving this goal is to develop new or modified machine learning techniques that produce more explainable models. These models are intended to be combined with state-of-the-art human-computer interface techniques that can turn models into understandable and useful explanation dialogs for the end user. ... “XAI is one of a handful of current DARPA programs expected to enable the third-wave AI systems, where machines understand the context and environment in which they operate, and over time build underlying explanatory models that allow them to characterize real-world phenomena.” To take an example from medical practice: after examining the patient data, the physician should be able both to understand and to explain to the patient why the decision support system recommended flagging a risk of heart attack.


How AI could save the environment

Google has also used its own AI expertise to improve its energy efficiency as a company—leveraging DeepMind's machine learning capabilities, it reduced the amount of energy needed to cool its data centers by 40%. "AI is most helpful when the possible solution to a problem resides in large, highly dimensional datasets," Pucell said. "If you think about climate data, there's a wealth of traditional structured data about temperature, sea levels, emissions levels, etc. But there's also a lot of unstructured data in the form of images, video, audio, and text. When it comes to analyzing massive amounts of unstructured data, deep learning is really the only game in town." At USC, Dilkina's research group has used AI to develop optimization methods for wildlife conservation planning—an area where highly limited budgets need to be allocated to protect the most ecologically effective land, she said. Her team has also used machine learning and game theory to help protected areas fight the poaching of endangered animals, including elephants and rhinos.



Quote for the day:


"Nobody in your organization will be able to sustain a level of motivation higher than you have as their leader." -- Danny Cox


Daily Tech Digest - April 23, 2019

How and where to use serverless computing

Serverless has a great attraction not just for application developers, but also for systems operations personnel, says Ken Corless, principal in the cloud practice at Deloitte Consulting. Whether offered by hyperscale cloud providers or implemented on-premise with various solutions on the market, the goal of serverless computing is the same: “ruthless automation and self-service to speed the software development lifecycle,” Corless says. For IT administrators, serverless reduces the “request-response” cycle of ticket-based workloads and allows administrators to focus on higher-level tasks such as infrastructure design or creating more automation, Corless says. There are two main use cases that Corless is seeing. One is in application development for creating modern, loosely coupled services-based applications. Both function-as-a-service (FaaS) and backend-as-a-service (BaaS)—two cloud-enabled services that achieve serverless computing—can dramatically improve the productivity of a software delivery team by keeping teams small, he says.


Having your application in well-packaged, stand-alone containers is the first step in managing multiple microservices. The next is managing all of the containers, allowing them to share resources and setting configuration for scaling. This is where a container orchestration platform comes in. When you use a container orchestration tool, you typically describe the configuration in a YAML or JSON file. This file specifies where to download the container image, how networking between containers should be handled, how to handle storage, where to push logs, and any other necessary configuration. Since this configuration lives in a text file, you can add it to source control to track changes and easily include it in your CI/CD pipeline. Once containers are configured and deployed, the orchestration tool manages their lifecycle. This includes starting and stopping a service, scaling it up or down by launching replicas, and restarting containers after failure. This greatly reduces the amount of management needed.
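As an illustrative sketch of such a file (a hypothetical Kubernetes-style Deployment; the service name and image are made up), the declarative configuration captures the image to pull, the replica count for scaling, and container networking in one version-controllable document:

```yaml
# Hypothetical Kubernetes Deployment for a single microservice.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: orders-service            # illustrative name
spec:
  replicas: 3                     # scaling: the orchestrator keeps 3 copies running
  selector:
    matchLabels:
      app: orders
  template:
    metadata:
      labels:
        app: orders
    spec:
      containers:
      - name: orders
        image: registry.example.com/orders:1.0   # where to download the image
        ports:
        - containerPort: 8080                    # networking between containers
```

Because this is plain text, it can sit in the same repository as the application code, and the orchestrator continuously reconciles the running state (including restarts after failure) against it.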


5 Ways to Get the Most Out of Your Design Team

Building a team of solid design practitioners is just the first step towards building a successful design practice. Leaders who can create and share a vision, inspire their team and the rest of the organization, and advocate for design at the executive level are essential. Nearly two-thirds of the Level 5 companies in our study have teams led by design leaders who are directors or above, who likely have greater influence with executives, more accountability, and who are better positioned to develop strong partnerships with leaders in other functions. Ultimately, this means that the more senior the design leadership, the greater the impact on the bottom line. Compared to the Level 1 companies in our study, design leaders at the Level 5 organizations were nearly three times more likely to be involved in critical business decisions and to be peers with their counterparts in engineering and product management. They were four times more likely to own and develop key products and features with key partners, and nearly twice as likely to report directly to the CEO — underscoring the importance of empowering one’s design team within the context of the larger product team.


Processing the application in a cloud environment would require transferring all the data readings across the network. Processing at the edge, however, would eliminate the need to transfer those readings. The link between the edge and the cloud would carry only periodic reports, so it would cost less than a link that carried a constant flow of high-volume data. The tradeoff is the continuing cost of a communication link versus the cost of locating and maintaining a processor at the edge. Processors have continued to fall in price, so edge computing will be less expensive than cloud computing in many cases. But each application is different, and organizations must study each carefully before choosing an option. ... An edge computing facility can be located within a warehouse, refinery or retail store, but other options are also available. In the past few years, micro modular data centers (MMDCs) have grown in popularity. An MMDC is a complete computing facility in a box, containing everything found in a data center: processors, network, power, cooling and fire suppression, as well as protection from electromagnetic interference, shock and vibration.
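As a back-of-the-envelope sketch of that tradeoff (every number below is a made-up placeholder, not a figure from the article), the comparison reduces to recurring link cost versus edge hardware cost:

```python
# Hypothetical monthly costs; every number here is an assumption for illustration.
raw_data_gb_per_month = 5_000        # constant high-volume sensor stream
report_gb_per_month = 5              # periodic reports only
cost_per_gb_transferred = 0.09       # $/GB over the WAN link
edge_node_monthly_cost = 250.0       # amortized hardware + maintenance at the edge

cloud_option = raw_data_gb_per_month * cost_per_gb_transferred
edge_option = report_gb_per_month * cost_per_gb_transferred + edge_node_monthly_cost

print(f"cloud: ${cloud_option:.2f}/mo, edge: ${edge_option:.2f}/mo")
# cloud: $450.00/mo, edge: $250.45/mo -> edge wins for this hypothetical workload
```

With different assumptions (cheaper bandwidth, pricier edge maintenance) the conclusion flips, which is exactly why each application must be studied individually.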



“The way we work is changing,” said Jonathan Christensen, chief experience officer at Symphony. “Collaboration platforms and other innovations bring positive improvements that enable more flexibility and better work-life balance. But a more casual approach to workplace communications, and digital habits in general, presents major security risks. “Employees won’t keep secure practices on their own, and employers must consider how they will secure workforce communication over messaging and collaboration tools, just like they did with email.” The research also uncovered other trends that put employers at risk, including the fact that 51% of respondents are using collaboration platforms to discuss social plans, 44% are sharing memes and photos, and 18% admitted using them to ask a co-worker out on a date. This ease of communication poses a danger of creating a casual attitude to workplace communications, said the study report, because 29% admitted talking badly about a client or customer, 19% had shared a password


WannaCry Stopper Pleads Guilty to Writing Banking Malware

Hutchins, a British national, was arrested by the FBI in the U.S. and charged on Aug. 2, 2017, just before he was set to fly back to the U.K. after attending the Black Hat and Def Con security conferences. He has remained in the U.S. since then, continuing to work for Los Angeles-based Kryptos Logic, a security consultancy, where he specializes in reversing malware. Hutchins' guilty plea was filed Friday in federal court in Wisconsin. Hutchins pleaded guilty to two counts of developing and distributing malicious software aimed at collecting data that would aid in fraudulently compromising bank accounts. Each count carries a maximum penalty of five years in prison, a $250,000 fine and one year of supervised release. Hutchins could also be subject to a restitution order. As a result of his guilty plea, prosecutors have agreed to drop eight other counts against him that were lodged in a superseding indictment.


Open architecture and open source – The new wave for SD-WAN?

Cloud providers and enterprises have discovered that 90% of user experience and security problems arise in the network: between where the cloud provider resides and where the end user consumes the application. Therefore, both cloud providers and large enterprises with digital strategies are focusing on building their solutions on open source stacks. Having a viable open source SD-WAN solution is the next step in the SD-WAN evolution, where the community becomes involved in the solution, similar to what happened with containers and related tooling. Now that we’re in 2019, are we going to witness a new era of SD-WAN? Are we moving to an open architecture with an open source SD-WAN solution? An open architecture should be the core of the SD-WAN infrastructure, where additional technologies are integrated inside the SD-WAN solution rather than remaining merely complementary VNFs. There is an interface and native APIs that allow you to integrate logic into the router, so the router can intercept traffic and act on it.


How 5G could shape the future of banking

“5G will begin to unlock some interesting things that we've dreamt about or imagined for some time,” said Venturo, the chief innovation officer at U.S. Bank. “5G is exponentially more powerful than 4G; it has such low latency and such high bandwidth that for a lot of applications, it will make a lot of sense to use 5G instead of Wi-Fi.” Jeremy K. Balkin, head of innovation, retail banking and wealth management at HSBC USA, has a similarly enthusiastic take. “Banking today is an experience that our customers want when, where and how they choose," he said. "The benefits of 5G networks offer next-generation mobile internet connectivity, faster speeds and more reliable connections on smartphones and other devices, which we believe will benefit all consumers.” 5G may help Venturo revive projects in his innovation lab that he has had to set aside because network technology could not support them.


Developer Skills for AI

AI allows developers to do things in code that cannot be done algorithmically, and that is the magic. A certain class of problem in the AI space, like looking at a photo and determining whether it contains a cat or a dog, has been solved: five years ago it was next to impossible; today it's trivial. The beauty of AI/ML is the ability to look at a massive data set and find patterns that a human would never find by looking at it. As developers, we are accustomed to writing code that takes an input, applies an algorithm, and produces a known output. That's what programming is all about, to a large degree. With ML, we turn that on its head a little bit: we don't know the algorithm; we don't even have an algorithm. Instead, we build an ML model, and we give it the inputs and the outputs. The inputs might be a hundred million credit card transactions that have actually taken place; the outputs are the ones that were fraudulent.
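A minimal sketch of that inputs-plus-outputs workflow (using scikit-learn with synthetic stand-in data; the features, labels, and threshold are hypothetical, not a real fraud dataset):

```python
# We never write the fraud-detection algorithm; the model infers it from examples.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Inputs: e.g. [amount, hour_of_day, is_foreign] for past transactions (synthetic).
X = rng.random((1000, 3))
# Outputs: which of those transactions were actually fraudulent (toy rule
# standing in for historical labels).
y = (X[:, 0] > 0.9).astype(int)

model = LogisticRegression().fit(X, y)

# Score a new, unseen transaction.
new_transaction = np.array([[0.95, 0.2, 1.0]])
print(model.predict_proba(new_transaction)[0, 1])  # estimated fraud probability
```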


Debunking the Discourse Around Cloud Security

The apprehension with which cloud safety is met is nothing short of ironic given the fact that in many ways, the cloud is actually more secure than on-premise, largely due to cloud providers collectively investing more into security controls than businesses can on their own. Take AWS for example. Its infrastructure hasn't been hacked in years, which is very good going, especially considering attackers are constantly targeting its infrastructure. So what’s the key to cloud security? First things first, choose established public cloud players that you can rely on. They often have the resources and expertise to protect their own infrastructure. However, in terms of education, what we need is a complete paradigm shift when it comes to cloud safety, the key to which lies in understanding the shared responsibility model. The shared responsibility model is universal among cloud providers: they ensure their infrastructure will be secure, but customers need to ensure whatever they do in the cloud is secure.



Quote for the day:


"You may be good. You may even be better than everyone esle. But without a coach you will never be as good as you could be." -- Andy Stanley


Daily Tech Digest - April 22, 2019

Fujitsu completes design of exascale supercomputer, promises to productize it

The new system, dubbed “Post-K,” will feature an Arm-based processor called A64FX, a high-performance CPU developed by Fujitsu and designed for exascale systems. The chip is based on the Armv8 design, which is popular in smartphones, with 48 cores plus four “assistant” cores and the ability to access up to 32GB of memory per chip. A64FX is the first CPU to adopt the Scalable Vector Extension (SVE), an instruction set specifically designed for Arm-based supercomputers. Fujitsu claims A64FX will offer peak double-precision (64-bit) floating point performance of over 2.7 teraflops per chip. The system will have one CPU per node and 384 nodes per rack, which comes out to roughly one petaflop per rack. Contrast that with Summit, the top supercomputer in the world, built by IBM for the Oak Ridge National Laboratory using IBM Power9 processors and Nvidia GPUs: a Summit rack has a peak compute of 864 teraflops.
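The rack-level figure follows directly from the per-chip numbers quoted above; a quick sanity check (in Python, for convenience):

```python
# Peak double-precision performance, using the figures quoted in the article.
teraflops_per_chip = 2.7
nodes_per_rack = 384          # one CPU per node

rack_peak = teraflops_per_chip * nodes_per_rack
print(f"{rack_peak:.0f} teraflops per rack (~1 petaflop)")   # 1037 teraflops
```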



As attacks get worse and more commonplace, the article noted, companies need cybersecurity professionals more and more. But because of a perfect storm of scarce skills and high demand, security jobs command high salaries, meaning that businesses not only struggle to find the right people, they have to pay top dollar to get them. All of that means that cyber-criminals are having a field day, as the article illustrates. Attackers take advantage of ill-prepared companies, knowing that they are likely to be successful. It’s clear that the industry does need to improve, for the sake of customers and businesses alike. And to do that, we need good people with the right skills. The industry has known for a while that those people are not easy to come by – there are simply not enough of them. There are a lot of reasons for that shortage, and it’s worth bearing in mind that it’s not the easiest industry to work in; the stress of the work means that mental health issues are rife.


Node.js vs. PHP: An epic battle for developer mindshare

Suddenly, there was no need to use PHP to build the next generation of server stacks. One language was all it took to build Node.js and the frameworks running on the client. “JavaScript everywhere” became the mantra for some. Since that discovery, JavaScript has exploded. Node.js developers can now choose between an ever-expanding collection of excellent frameworks and scaffolding: React, Vue, Express, Angular, Meteor, and more. The list is long and the biggest problem is choosing between excellent options. Some look at the boom in Node.js as proof that JavaScript is decisively winning, and there is plenty of raw data to bolster that view. GitHub reports that JavaScript is the most popular language in its collection of repositories, and JavaScript’s kissing cousin, TypeScript, is rapidly growing too. Many of the coolest projects are written in JavaScript and many of the most popular hashtags refer to it. PHP, in the meantime, has slipped from third place to fourth in this ranking and it’s probably slipped even more in the count of press releases, product rollouts, and other heavily marketed moments.


Network analytics tools take monitoring to the next level

These tools help to identify problems as well as assist with capacity planning. Common tools include Simple Network Management Protocol (SNMP), syslog and Cisco NetFlow. While these tools provide some great information, they're siloed systems that work independently from one another. So, to perform any deep investigative work needed to determine the root cause of a particularly tricky network performance issue, IT staff would waste hours bouncing between tools. Modern network analytics tools provide a remedy to this time-consuming and complicated process. Network analytics software draws on traditional monitoring protocols and methods and then adds more sophisticated data flow collection methods. All collected data is then analyzed in real time using AI. By combining all data sources, the analytics platform can comb through far more information than ever before in order to make accurate network performance conclusions.
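As a minimal sketch of the kind of conclusion such a platform draws (a simple statistical baseline stands in here for the AI layer; the flow data is synthetic), a host's traffic volume can be compared against its own history to flag anomalies:

```python
# Flag a flow whose volume deviates sharply from its own rolling baseline.
from statistics import mean, stdev

def is_anomalous(history_bytes, current_bytes, threshold=3.0):
    """Return True if current volume is a statistical outlier vs. history."""
    mu, sigma = mean(history_bytes), stdev(history_bytes)
    if sigma == 0:
        return current_bytes != mu
    return abs(current_bytes - mu) / sigma > threshold

# Hypothetical per-interval byte counts for one host, e.g. from NetFlow records.
baseline = [1200, 1350, 1100, 1250, 1300, 1280, 1150]
print(is_anomalous(baseline, 1275))    # False: within normal variation
print(is_anomalous(baseline, 98_000))  # True: likely worth investigating
```

A production platform correlates many such signals (SNMP counters, syslog events, flow records) rather than one, but the principle of baselining and exception-flagging is the same.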


A Data Quality Framework for Big Data

Data profiling is a good first step in judging data quality, but it is different for big data than for structured data. Structured methods of column, table, and cross-table profiling can’t easily be applied to big data. Data virtualization tools can create row/column views for some types of big data, and those views can then be profiled using relational techniques. This approach provides useful statistics about data content but fails to give a full picture of the shape of the data. Visual profiling shows patterns, exceptions, and anomalies that are helpful in judging big data quality. Most “unstructured” data does have structure, but it is different from relational structure. Visual profiling will help to show the structure of document stores and graph databases, for example. Data samples can then be checked against the inferred structure to find exceptions — perhaps iteratively refining understanding of the underlying structure. Data quality judgments and structural findings should be recorded in a data catalog, allowing data consumers to evaluate the usability of the data. With big data, quality must be evaluated as fitness for purpose. With analytics, the need for data quality can vary widely by use case. The quality of data used for revenue forecasting, for example, may demand a higher level of accuracy than data used for market segmentation.
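A minimal sketch of structural profiling for document data (the documents and field names below are synthetic): infer the implicit schema by counting which fields appear, then check the sample against it for exceptions:

```python
from collections import Counter

docs = [
    {"id": 1, "name": "Ada", "email": "ada@example.com"},
    {"id": 2, "name": "Lin"},
    {"id": 3, "name": "Sam", "email": "sam@example.com", "phone": "555-0100"},
]

# Count how often each field occurs across the sample.
field_counts = Counter(field for doc in docs for field in doc)

# Fields present in every document form the inferred core structure.
inferred = {f for f, n in field_counts.items() if n == len(docs)}
print("inferred core structure:", inferred)        # {'id', 'name'}

# Exception report: documents missing common (but not universal) fields.
for doc in docs:
    missing = {f for f, n in field_counts.items() if n > 1} - doc.keys()
    if missing:
        print(f"doc {doc['id']} missing {missing}")  # doc 2 missing {'email'}
```

Iterating on this loop — infer, check, refine — is the textual analogue of the visual profiling described above, and the results belong in the data catalog.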


Google Expands ML Kit, Adds Smart Reply and Language Identification

In a recent Android blog post, Google announced the release of two new Natural Language Processing (NLP) APIs for ML Kit, a mobile SDK that brings Google machine learning capabilities to iOS and Android devices: Language Identification and Smart Reply. In both cases, Google is providing domain-independent APIs that help developers analyze and generate text, speech, and other kinds of natural language input. Both of these APIs are available in the latest version of the ML Kit SDK on iOS (9.0 and higher) and Android (4.1 and higher). ... Smart Reply allows contextually aware message response suggestions to be returned within a chat-based application, enabling quick and accurate responses in a chat session. Gmail users have been using the Smart Reply feature for a couple of years now on the mobile and desktop versions of the service. Now developers can include Smart Reply capabilities within their own applications.


Can Blockchain Replace EDI In The Supply Chain?
“Blockchain in B2B integration brings more agility. Today, B2B integration requires that both parties know each other, at least on a technical level, to provide ways to solve issues such as nonrepudiation and acknowledgement,” writes Forrester Research principal analyst Henry Peyret in “The Future of B2B Integration.” “Forrester expects that, in the next three to five years, blockchain technologies could be used to provide additional agility in building dynamic ecosystems.” Although EDI has built a 20-year track record of reliability, the venerable technology’s main weak point is its cost. “If there’s going to be a rationale for replacement, it might just be that blockchain is cheaper,” Fearnley says. But not everyone says the transition from EDI to blockchain is a done deal. “There have been many contenders to overthrow EDI over the years, and none of them have succeeded because EDI does what it does pretty well,” says Simon Ellis, program vice president of supply chain strategies at IDC. He adds, however, “If you can make things more secure and faster, everyone will benefit.”




Despite that, Oracle stopped providing free security updates for Java 8 in January 2019, in an attempt to move organizations onto paid licensing agreements. Naturally, running out-of-date, insecure versions of Java is an exceptionally bad idea, presenting a conundrum to IT managers responsible for the deployment of Java applications: either pay to maintain support for something that was once free, or—if even possible—attempt to move an application off Java entirely. There is a viable third option, however: using a non-Oracle distribution of Java. Because Java is still fundamentally open source, any organization that wishes to ship its own patched version of OpenJDK can do so. Red Hat—which contributes to Java upstream and ships a number of its own products built on Java—is doing just that. Red Hat is taking up the mantle of OpenJDK maintainer for versions 8 and 11, which will be supported until June 2023 and October 2024, respectively. New features are not expected for either version, as both are essentially in maintenance mode.




Data center workers happy with their jobs -- despite the heavy demands
Overall satisfaction is pretty good, with 72% of respondents generally agreeing with the statement “I love my current job,” and a third strongly agreeing. And 75% agreed with the statement, “If my child, niece or nephew asked, I’d recommend getting into IT.” There is also a feeling of significance among data center workers, with 88% saying they feel they are very important to the success of their employer. That’s despite some challenges, not the least of which is a skills and certification shortage. Survey respondents cite a lack of skills as the biggest area of concern: only 56% felt they had the training necessary to do their job, even though 74% said they had been in the IT industry for more than a decade. The industry offers certification programs; every major IT hardware provider has them. But 61% said they have not completed or renewed a certificate in the past 12 months. There are several reasons why: a third (34%) said it was due to a lack of a training budget at their organization, while 24% cited a lack of time, 16% said management doesn’t see a need for training, and 16% cited the absence of training plans within their workplace.




Closing the cyber security gender gap reflects the realities of the larger global cyber environment, where there is diversity in gender, politics, society, economics, and culture. The bad guys are diverse in their thinking and actions, and so are potential foreign security partners. As such, different perspectives and experiences are a necessary complement in an industry that often hits an obstacle when it comes to language and terminology. More importantly, greater inclusion in cyber security starts to tear down antiquated perceptions that the profession is geared toward males. This is almost ironic considering that women have played prominent roles in computing, including programming, designing the computer systems that ran the U.S. census, and writing the software that supported the Apollo 11 missions. Addressing the cultural perception of the cyber security industry is necessary in order to level out employment numbers. Part of this requires a review to ensure that compensation levels are equal: according to a 2017 global information security study, women earned less than their male counterparts at every level.





Quote for the day:


"Surprise yourself every day with your own courage." -- Denholm Elliott


Daily Tech Digest - April 21, 2019

Blockchain: The Ultimate Disruptor?


What the internet did for the exchange of information, blockchain has the potential to do for the exchange of a digital asset’s value. Right now, many people in the blockchain space are talking about “tokenization,” which breaks down the ownership of an asset into digital tokens to allow wider-scale ownership of that asset. Tokenization started with initial coin offerings (ICOs) and has evolved to securitized token offerings (STOs), which have the potential to unlock the value of trillions of dollars of assets that are currently closed to the average person and make them more accessible. We’re talking about real-estate holdings, private equity, etc. When these assets are tokenized and brought into the market, it could impact the average person and how they do their financial and retirement planning, as well as where and what they choose to invest in. ... Rafia says about the wider access potential, “Currently, private equity, venture capital and other similar investments are not available to retail investors because there are a lot of regulations preventing it, as they tend to be riskier asset classes


Key to changing your enterprise is analyzing the impact of changes and planning those changes in a smart way. We do not advocate a ‘big up-front design’ approach, with huge, rigid multi-year transformation plans. Rather, in an increasingly volatile business world you need an iterative approach in which your plans are updated regularly to match changing circumstances, typically in an agile manner. The figure below shows a simple example of dependencies between a series of changes, depicted with the pink boxes. A delay in ‘P428’ causes problems in the schedule, since ‘P472’ depends on it. Moreover, since the two changes overlap in scope (shown in the right-hand table), they could potentially be in each other’s way when they also overlap in time. This information is calculated from the combination of project schedule and architecture information, a clear example of the value of integrating this kind of structure and data in a Digital Twin.
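A minimal sketch of that calculation (the dates and scope labels below are hypothetical; only the project names ‘P428’ and ‘P472’ come from the example above) flags pairs of changes that overlap both in time and in scope:

```python
from datetime import date

# Hypothetical schedule and scope data for the two changes in the figure.
changes = {
    "P428": {"start": date(2019, 5, 1), "end": date(2019, 6, 15),
             "scope": {"CRM", "Billing"}},
    "P472": {"start": date(2019, 6, 1), "end": date(2019, 7, 31),
             "scope": {"Billing", "Portal"}},
}

def conflict(a, b):
    """Return the shared scope if the changes overlap in both time and scope."""
    overlap_in_time = a["start"] <= b["end"] and b["start"] <= a["end"]
    shared_scope = a["scope"] & b["scope"]
    return shared_scope if (overlap_in_time and shared_scope) else None

names = list(changes)
for i, x in enumerate(names):
    for y in names[i + 1:]:
        shared = conflict(changes[x], changes[y])
        if shared:
            print(f"{x} and {y} may collide on {shared}")
# P428 and P472 may collide on {'Billing'}
```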


How People Are the Real Key to Digital Transformation

An interview with Gerald C. Kane, Anh Nguyen Phillips, Jonathan R. Copulsky, and Garth R. Andrus, the authors of "The Technology Fallacy."
Digital disruption affects all levels of the organization. Our research shows, however, that higher-level leaders are generally much more optimistic about how their organization is adapting to that disruption than lower-level employees are. This result suggests that leaders may be overestimating how well their organization is responding. In the book, we provide a framework by which leaders can survey their employees to gauge how digitally mature their organization is against 23 traits, which we refer to as the organization’s digital DNA. Digital maturity is usually unevenly distributed throughout an organization, and we encourage organizations to use this framework to assess how it is distributed so they can begin to identify and address the areas of improvement that are most likely to yield organizational benefits. ... A single set of organizational characteristics proved essential for digital maturity: accepting risk of failure as a natural part of experimenting with new initiatives, actively implementing initiatives to increase agility, valuing and encouraging experimentation and testing as a means of continuous organizational learning, recognizing and rewarding collaboration across teams and divisions, increasingly organizing around cross-functional project teams, and empowering those teams to act autonomously.


Cachalot DB as a Distributed Cache with Unique Features

The most frequent use case for a distributed cache is to store objects identified by one or more unique keys. A database contains the persistent data and, when an object is accessed, we first try to get it from the cache and, if it is not available, load it from the database. Usually, if the object is loaded from the database, it is also stored in the cache for later use. ... By using this simple algorithm, the cache is progressively filled with data and its “hit ratio” improves over time. This cache usage is usually associated with an “eviction policy” to avoid excessive memory consumption. When a threshold is reached (either in terms of memory usage or object count), some of the objects are removed from the cache. The most frequently used eviction policy is “least recently used,” abbreviated LRU. In this case, every time an object is accessed in the cache, its associated timestamp is updated. When eviction is triggered, we remove the objects with the oldest timestamps. Using cachalot as a distributed cache of this type is very easy.
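A generic sketch of this cache-aside pattern with LRU eviction (illustrative Python, not Cachalot's actual client API):

```python
from collections import OrderedDict

class LruCache:
    """In-memory cache that evicts the least recently used entry when full."""
    def __init__(self, capacity):
        self.capacity = capacity
        self.items = OrderedDict()

    def get(self, key):
        if key not in self.items:
            return None
        self.items.move_to_end(key)          # refresh the "recently used" stamp
        return self.items[key]

    def put(self, key, value):
        self.items[key] = value
        self.items.move_to_end(key)
        if len(self.items) > self.capacity:  # threshold reached: evict oldest
            self.items.popitem(last=False)

def load_product(product_id, cache, db):
    value = cache.get(product_id)            # 1. try the cache first
    if value is None:
        value = db[product_id]               # 2. fall back to the database
        cache.put(product_id, value)         # 3. store for later use
    return value
```

Each hit refreshes the entry's position, so repeatedly accessed objects survive eviction and the hit ratio improves over time, exactly as described above.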


Enterprise Architecture: A Blueprint for Digital Transformation


Enterprise architects have a tough job. They have to think strategically but act tactically. A successful enterprise architect can sit down at the boardroom table and discuss where the business needs to go, then translate “business speak” into technical capabilities on the back end. The key to EA is to always focus on business needs first, then how those needs can be met by applying technology. It comes down to the concept of IQ (intelligence quotient or “raw intelligence”) and EQ (emotional quotient, or “emotional intelligence”). As a recent Forbes article stated, “when it comes to success in business, EQ eats IQ for breakfast.” Good enterprise architects need to have good IQ and EQ. This balance prevents pursuing the latest technology just because it’s cool but instead determining what’s the best way to meet the business need. At the end of the day, an EA should be measured by the business outcomes it’s delivering. Our approach to EA (see below) starts with a business outcome statement, and ends with governance processes to verify we’re achieving those business outcomes and adhering to the EA blueprint.


Crisis Resilience for the Board


Similar to culture oversight, boards are increasingly monitoring company technology activities, from cyber risk to disruption risk to digital transformation. Directors are asking management tough questions about technologies that are vital to the business and whether they are truly protected from the most likely and impactful risks. Beyond protecting data, the board should understand whether management is incorporating resilience into their information technology and cybersecurity strategies. To do so, directors may seek to understand how the most critical data—or that which is most vital to the business’s success—is backed up and protected, both physically and logically. Directors should understand, at a high level, what the most critical data asset sets or capabilities are to the company and the risks posed to them. Additionally, directors should ask management whether it is considering innovative technologies to both protect assets and enable quick recovery in the event of potential loss. ... Directors might also endeavor to learn about leading practices around risk management, crisis management, cyber risk, physical security, succession planning, and culture risk. This could provide a level of comfort with the risks posed to the company, as well as a degree of confidence in the company’s ability to respond.


The Cybersecurity 202: This is the biggest problem with cybersecurity research


“There are a whole lot of possible barriers that will come to the fore if an organization asks their lawyers about it,” Moore said. “It turns out that many of those risks, on deeper inspection, can be mitigated and overcome. But there has to be institutional will to do it.” One irony of this problem is that the cybersecurity community has been hyper-focused on information sharing in recent years — but the focus has been on companies sharing hacking threats from the past day or two so they can guard against them. The government has championed these threat-sharing operations and facilitates them through a set of organizations called information sharing and analysis centers and information sharing and analysis organizations. That sort of sharing has a clear benefit for companies because it helps them defend against threats that may be coming in the next hour or day. But companies have made less progress on sharing longer-range cybersecurity information that can help address more fundamental cybersecurity challenges, Moore said.


The Connection Between Strategy And Enterprise Architecture (Part 3)

Business capabilities connect the business model with the enterprise architecture, which is composed of the organizational structure, processes, and resources that execute the business model. It is a combination of resources, processes, values, technology solutions, and other assets that are used to implement the strategy. ... business capabilities comprise a fundamental building block that enables and supports the business transformation initiatives companies are undertaking to remain relevant in the constantly changing marketplace. Companies that excel in mapping their existing capabilities and creating a road map to close the gap in their future capabilities are most likely to remain ahead of the competition by responding effectively to industry and market dynamics. Therefore, the way we connect the company’s high-level strategic priorities and objectives to the resources, processes, and ultimately the system landscape that execute the strategy is by mapping and modeling the necessary capabilities.


Leading Innovation = Managing Uncertainty

McKinsey Quarterly (2019) - Three Horizons Framework
Uncertainty is the central characteristic of innovation. While generating new ideas and inventing new technologies is important, it is even more important for innovators to identify the unknowns that have to be true for their ideas and technologies to succeed in the market. We can only claim to have succeeded at innovation when we find the right business model to profitably take our idea or technology to market. At the strategy level, several frameworks have been developed to help leaders understand their product and service portfolios and make decisions. These frameworks use different dimensions that hide, in plain sight, the real challenge leaders are facing: managing uncertainty. ... The McKinsey framework is perhaps the most popular of them all. This framework maps two dimensions, value and time, to create three horizons. The nearest horizon is Horizon 1, where we extend the core and generate value for the company straight away. In Horizon 2, we build businesses around new opportunities with the potential to impact revenues in the near term. The farthest horizon is Horizon 3, where visionaries work on viable options that will only deliver value to the company after several years.


Cloud Security Architectures: Lifting the Fog from the Cloud

User behavior analytics (UBA) security solutions oriented primarily to the insider threat have matured and are commonplace. ... This data is crucial for forensic analysis when a major breach has been detected. There isn’t a comparably mature technology yet for the cloud, to which users have migrated their work. The success of UBA technology teaches security professionals how to properly architect new enterprise systems with users’ cloud behaviors in full view. The core approach for the cloud is to gather data from cloud storage logs, extract features, and carefully architect sets of indicators that detect likely breaches. Cloud logs can reveal, for example, when file extensions are changed, what documents are downloaded and to where, whether a document has been downloaded by an anonymous user, and when an unusually high number of documents are downloaded to an odd geolocation. These are all early indicators of potential breach activity.
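A minimal sketch of the indicator approach (the log record fields, thresholds, and geolocations below are hypothetical; real cloud audit logs differ by provider):

```python
EXPECTED_GEOS = {"US", "DE"}   # locations where downloads normally occur
BULK_THRESHOLD = 100           # downloads per user per hour considered unusual

def indicators(record, hourly_download_count):
    """Return the list of breach indicators a single log record trips."""
    hits = []
    if record["action"] == "rename" and record["old_ext"] != record["new_ext"]:
        hits.append("file extension changed")
    if record["action"] == "download" and record["user"] == "anonymous":
        hits.append("download by anonymous user")
    if record["action"] == "download" and record["geo"] not in EXPECTED_GEOS:
        hits.append(f"download to odd geolocation: {record['geo']}")
    if hourly_download_count > BULK_THRESHOLD:
        hits.append("unusually high download volume")
    return hits

event = {"action": "download", "user": "anonymous", "geo": "ZZ"}
print(indicators(event, hourly_download_count=450))
# ['download by anonymous user', 'download to odd geolocation: ZZ',
#  'unusually high download volume']
```

In practice, several indicators firing together on one principal is what escalates an event from noise to a likely breach.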



Quote for the day:


"Leaders need to be optimists. Their vision is beyond the present." -- Rudy Giuliani


Daily Tech Digest - April 20, 2019

How to reconstruct your business’s value chain for the digital world

What’s the big advantage of digital? It allows you to disconnect yourself from physical constraints. With Uber, you no longer have to be in the street to hail a cab; you can order a cab from anywhere. If you digitize the supply-chain process, you are no longer linking the production of the product to one physical location. In the analog world, a person would check the inventory and write an order for supplies. When there was a spike in demand, that person would call more people and write more orders for more supplies. But in the digital world, you can create a manufacturing process where your inventory, recipes, and prices are all available on a digitized, harmonized ecosystem. When demand spikes, you can turn the dial on your robotic process automation (RPA) tool. When we digitize and harmonize complex business processes, we no longer have to call a guy who orders a part; instead, we have a view into the inventory across multiple suppliers. The CIO has a unique and critical role in digital transformation, as long as they don’t fall into a few common traps. One such trap is when the CEO throws money at you and tells you, “Bring me this shiny new technology.”


Why Enterprise-Grade Cybersecurity Needs a Federated Architecture

A federated architecture combines the strengths of centralized and distributed and is, therefore, a kind of “best of both worlds” approach. With federated, a controller is placed in each data center or public cloud region (just like distributed), but those multiple controllers act in concert so as to provide the abstraction that there is one centralized controller. All of the controllers in a federated architecture communicate with each other to share information about the organization’s security policy as well as the workloads that are being secured. This type of architecture is the best when it comes to securing global infrastructure at scale. And, as is typically the case when writing enterprise-grade software, making the right architectural choice and then implementing it in an elegant way required our architects and engineers to spend a little more time and be a little more thoughtful. Our ultimate goal was to deliver an enterprise-scale architecture that delivered the benefits of a federated architecture without the downsides of distributed and centralized.
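A toy sketch of that abstraction (entirely illustrative; this is not the vendor's actual design or API): one controller per region holds local workload state while policy is replicated to every peer, so a query against the federation behaves like a query against a single centralized controller:

```python
class Controller:
    """One controller per data center or cloud region."""
    def __init__(self, region):
        self.region = region
        self.workloads = {}   # local state: workloads secured in this region
        self.policy = {}      # security policy, replicated across the federation

    def sync_policy(self, peers):
        # Share policy with every peer so all controllers act in concert.
        for peer in peers:
            peer.policy = dict(self.policy)

def global_view(controllers):
    # The "one logical controller" abstraction: merge every region's state.
    view = {}
    for c in controllers:
        view.update(c.workloads)
    return view

us, eu = Controller("us-east"), Controller("eu-west")
us.policy = {"default": "deny"}
us.sync_policy([eu])                      # policy now consistent everywhere
us.workloads["web-1"] = "us-east"
eu.workloads["db-1"] = "eu-west"
print(global_view([us, eu]))   # {'web-1': 'us-east', 'db-1': 'eu-west'}
```

The payoff over a purely centralized design is that each region keeps working (and enforcing policy) even when the link between regions is down.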


Ready for 6G? How AI will shape the network of the future


Take the problem of coordinating self-driving vehicles through a major city. That’s a significant challenge, given that some 2.7 million vehicles enter a city like New York every day. The self-driving vehicles of the future will need to be aware of their location, their environment and how it is changing, and other road users such as cyclists, pedestrians, and other self-driving vehicles. They will need to negotiate passage through junctions and optimize their route in a way that minimizes journey times. That’s a significant computational challenge. It will require cars to rapidly create on-the-fly networks, for example, as they approach a specific junction—and then abandon them almost instantly. At the same time, they will be part of broader networks calculating routes and journey times and so on. “Interactions will therefore be necessary in vast amounts, to solve large distributed problems where massive connectivity, large data volumes and ultra low-latency beyond those to be offered by 5G networks will be essential,” say Stoica and Abreu.


IT Governance 101: IT Governance for Dummies, Part 2

One of the powerful aspects of COBIT is that it acts as the glue between governance and management, describing both governance and management processes. Its concept of cascading enterprise goals to IT goals to enabler goals and metrics ensures consistent communication and alignment. These enablers, such as processes, are where all the IT management frameworks can be plugged in, helping to give the frameworks a business context and ensuring that they focus on delivering value and outcomes, not just outputs. As stated by one expert in the UAE, “I think often because organizations do not do a goals cascade, things feel disconnected and orphaned, but once you do a proper goals cascade you can see and feel the interconnection and how goals are interdependent on each other to achieve the enterprise-level goals.” ... Clearly, these exploding business demands for new benefits exist and, at the same time, IT is expected to make everything secure, replace all that legacy stuff that is slowing down the Ubering, and stop IT from breaking as well.


Some internet outages predicted for the coming month as '768k Day' approaches

The good news is that network admins have known about 768k Day for a long time, and many have already prepared, either by replacing old routers with new gear or by making firmware tweaks to allow devices to handle global BGP routing tables that exceed even 768,000 routes. "Yes, TCAM memory settings can be adjusted to help mitigate, and even go beyond 768k routes on some platforms, which will work if you don't run IPv6. These setting changes require a reboot to take effect," Troutman said. "The 768k IPv4 route limit is only a problem if you are taking ALL routes. If you discard or don't accept /24 routes, that eliminates half the total BGP table size. "The organizations that are running older equipment should know this already, and have the configurations in place to limit installed prefixes. It is not difficult," Troutman added. "I have a telco ILEC client that is still running their network quite nicely on old Cisco 6509 SUP-720 gear, and I am familiar with others, too," he said.


Bots Are Coming! Approaches for Testing Conversational Interfaces

When testing such interfaces, natural language is the input, and we humans really love having alternatives; we love our synonyms and our expressions. Testing in this context moves from pure logic to something closer to fuzzy logic and clouds of probabilities. Because these interfaces are intended to provide natural interaction, testing them also requires a great deal of empathy and understanding of human society and ways of interacting. In this area I would include cultural aspects, including paraverbal aspects of speech (that is, all communication happening besides the spoken message, encoded in voice modulation and level). These elements add a further level of complexity that the person doing the testing often needs to consider. I believe it’s fair to say that testing a conversational interface can also be seen as tuning it so that it passes a Turing test. Another challenge faced when testing such interfaces is the distributed architecture of the systems involved.
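A minimal sketch of what variant-based testing can look like (classify_intent is a hypothetical stand-in for the bot's NLU component; the intent name and phrasings are made up):

```python
import pytest

def classify_intent(utterance: str) -> str:
    # Placeholder for the real natural-language-understanding component.
    text = utterance.lower()
    return "book_taxi" if ("taxi" in text or "cab" in text) else "unknown"

# Humans love synonyms and alternative phrasings, so one intent is exercised
# against many surface forms rather than a single canonical input.
@pytest.mark.parametrize("utterance", [
    "Get me a taxi",
    "I need a cab right now",
    "Could you order a taxi for 8pm?",
])
def test_taxi_intent_variants(utterance):
    assert classify_intent(utterance) == "book_taxi"
```

In a probabilistic setting, the assertion typically becomes a threshold (e.g. the right intent wins with at least some confidence, or at least N of M paraphrases pass) rather than an exact match.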


Protecting smart cities and smart people

For as long as most can remember, information security was a technology concern, handled by technologists and discussed by security engineers and associated professionals. Security vendors presented at security conferences, and security professionals attended accordingly: cat people with cat people. You know how it goes. Within a smart city ecosystem, we need to extend the cyber conversation beyond the traditional players. How do we make the city planner appreciate what we understand? How do we share and apply security best practices to an engineering company providing a Building Information Modelling (BIM) service to a hospital or defence project? Moreover, how do we, in the first instance, highlight the security concerns? Attending and speaking at numerous cyber conferences, I sometimes wonder: is this the right audience? In this digital ecosystem, we should be speaking to civic and government leaders about the security concerns facing smart cities and critical infrastructure, not exclusively to other security professionals, who are already well aware of the challenges and the resistance experienced.


Don't underestimate the power of the fintech revolution

According to Bank of England Governor Mark Carney, fintech’s potential is to unbundle banking into its core functions, such as settling payments and allocating capital. For central bankers and regulators who are monitoring the sector, the growth of fintech raises the same question as any other disruptive technology: will it lead to financial instability? Most fintech start-ups are not regulated as heavily as traditional financial institutions. So far, it’s the more open financial markets that have seen fintech develop most rapidly. One example is the e-payment system M-Pesa, which operates in Kenya, Tanzania and elsewhere, and is one of the biggest fintech success stories since its emergence just a decade ago. By effectively transforming mobile phones into payment accounts, M-Pesa has increased financial access for previously unbanked people. The permissive stance of the Kenyan central bank allowed the sector to develop rapidly in one of East Africa’s most developed economies.


Data Breaches in Healthcare Affect More Than Patient Data

Cybercriminals go after any data they perceive to be valuable, says Rebecca Herold, president of Simbus, a privacy and cloud security services firm, and CEO of The Privacy Professor consultancy. "Payroll data contains a wide range of really valuable data that cybercrooks can sell to other crooks for high amounts," she says. "With the growing number of pathways into healthcare systems and networks ... that are being established through employee-owned devices, through third parties/BAs, and through IoT devices, I believe that such fraud is increasing because of the many more opportunities that crooks have now to commit these types of crimes." The recent attacks on Blue Cross of Idaho and Palmetto Health spotlight the importance for healthcare entities to diligently safeguard all data, says former healthcare CISO Mark Johnson of the consultancy LBMC Information Security. The attacks "underscore for me that the healthcare industry needs to protect the entire environment, not just their large systems like the EMR," he says.


Why Your DevOps Is Not Effective: Common Conflicts in the Team

In the DNA of DevOps culture lies the principle of constant, continuous interaction and collaboration between different people and departments. The key reason for this is much greater final efficiency and much shorter time-to-market compared with the traditional approach. Proper implementation of DevOps shifts the focus from personal effectiveness to team efficiency. At the same time, thanks to automation and the widespread introduction of monitoring and testing, it is possible to spot a problem at an early stage and quickly find its cause. Building the right culture in the organization is important, and it does not depend on DevOps directly: problems occur in all companies, but in an organization with the right culture, all forces will be thrown at solving the problem and preventing it in the future, rather than at finding and punishing the guilty party.



Quote for the day:


"Leaders are more powerful role models when they learn than when they teach." -- Rosabeth Moss Kantor