Daily Tech Digest - July 31, 2018

How disaster recovery can serve as a strategic tool

“You can count on us” is a popular business mantra. But what does that mean exactly? Consider this thought experiment: You and a competitor are hit with the same incident, but one of you gets back up more quickly. Fast recovery will give you a competitive advantage, if you can pay the price. “The smaller your RTO and RPO values are, the more your applications will cost to run,” says Google Cloud in a how-to discussion of DR. Any solution should also be well tested. “Your customers expect your systems to be online 24x7,” says Scott Woodgate, director, Microsoft Azure, in this press release. ... A solid DR plan can also facilitate transformation-based efficiencies. Let's say your leadership has business reasons for migrating to a new data center or transitioning to a hybrid cloud. Part of planning a migration is preparing for degraded user experience and system downtime. If you are willing to use your DR assets during the transition, then once the cloud or new physical sites are ready, you can fail back from DR, minimizing disruption. As an IT pro, you may not want to define these events as disasters, but business leaders prefer using existing resources to investing in swing gear.
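
The RTO/RPO cost trade-off Google Cloud describes can be made concrete with a toy model (all figures here are hypothetical, not vendor pricing): with periodic backups, the worst-case RPO is one full backup interval, and tightening that interval drives cost up.

```python
# Toy model of the RTO/RPO cost trade-off. All figures are hypothetical.

def worst_case_rpo_hours(backup_interval_hours: float) -> float:
    """With periodic backups, the most data you can lose is one full interval."""
    return backup_interval_hours

def monthly_cost(backup_interval_hours: float, base_cost: float = 100.0) -> float:
    """Hypothetical cost model: cost grows as the backup interval shrinks."""
    return base_cost * (24.0 / backup_interval_hours)

for interval in (24.0, 6.0, 1.0):
    print(f"backup every {interval:>4}h -> worst-case RPO "
          f"{worst_case_rpo_hours(interval):>4}h, ~${monthly_cost(interval):,.0f}/month")
```

The shape of the curve, not the numbers, is the point: halving your data-loss window roughly doubles the spend in this toy model, which is the trade-off DR planners have to price.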

The cybersecurity incident response team: the new vital business team

We live and do business in a world fraught with cyber risks. Every day, companies and consumers are targeted with attacks of varying sophistication, and it has become increasingly apparent that everyone is considered fair game. Organisations of all sizes and industries are falling victim, and cyber risk is quickly becoming one of the most prevalent threats. When disruptions do occur from cyberattacks or other data incidents, they not only have a direct financial impact, but an ongoing effect on reputation. For example, Carphone Warehouse fell victim to a cyberattack in 2015, which resulted in the compromise of data belonging to more than three million customers and 1,000 employees. While it suffered financial losses from the remedial costs, which included a £400,000 fine from the Information Commissioner’s Office (ICO), the attack also led consumers to question whether their data was truly secure with the retailer and whether it was simply safer to shop elsewhere. That loss in consumer confidence is incredibly difficult to claw back, particularly at a time when grievances can be aired on social media and shared hundreds or thousands of times.

Managing IoT resources with access control

The first place to start in establishing an effective IoT security strategy is by ensuring that you are able to see and track every device on the network. Issues from patching to monitoring to quarantining all start with establishing visibility from the moment a device touches the network. Access control technologies need to be able to automatically recognize IoT devices, determine if they have been compromised and then provide controlled access based on factors such as the type of device, whether or not it is user-based and, if so, the role of the user. And they need to be able to do this at digital speeds. Another access control factor to consider is location. Access control devices need to be able to determine whether an IoT device is connecting remotely and, if not, where in the network it is logging in from. Different access may be required depending on whether a device is connecting remotely, or even from the lobby, a conference room, a secured lab or a warehouse facility. Location-based access policies are especially relevant for organizations with branch offices or an SD-WAN system in place.
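
As a rough sketch of the access-control logic described above (the device types, zones and VLAN names are illustrative, not any vendor's API), a policy lookup keyed on device type and location, with a quarantine override for compromised devices, might look like:

```python
# Minimal sketch of location- and role-aware access control for IoT devices.
# Device types, zones and network segments are hypothetical examples.

POLICY = {
    # (device_type, zone) -> allowed network segment
    ("camera", "lobby"): "video-vlan",
    ("camera", "secured-lab"): "restricted-vlan",
    ("badge-reader", "warehouse"): "facilities-vlan",
}

def grant_access(device_type: str, zone: str, compromised: bool) -> str:
    """Quarantine anything flagged as compromised; otherwise look up the policy."""
    if compromised:
        return "quarantine-vlan"
    # Default-deny: unknown device/location pairs land on a guest segment.
    return POLICY.get((device_type, zone), "guest-vlan")

print(grant_access("camera", "lobby", compromised=False))  # video-vlan
print(grant_access("camera", "lobby", compromised=True))   # quarantine-vlan
```

A real NAC product would derive the device type and compromise status from fingerprinting and telemetry rather than taking them as arguments, but the decision table itself is the core of location-based access policy.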

Artificial intelligence: Why a digital base is critical

The adoption of AI, we found, is part of a continuum, the latest stage of investment beyond core and advanced digital technologies. To understand the relationship between a company’s digital capabilities and its ability to deploy the new tools, we looked at the specific technologies at the heart of AI. Our model tested the extent to which underlying clusters of core digital technologies (cloud computing, mobile, and the web) and of more advanced technologies (big data and advanced analytics) affected the likelihood that a company would adopt AI. As Exhibit 1 shows, companies with a strong base in these core areas were statistically more likely to have adopted each of the AI tools—about 30 percent more likely when the two clusters of technologies are combined. These companies presumably were better able to integrate AI with existing digital technologies, and that gave them a head start. This result is in keeping with what we have learned from our survey work. Seventy-five percent of the companies that adopted AI depended on knowledge gained from applying and mastering existing digital capabilities to do so.

The 5 Clustering Algorithms Data Scientists Need to Know

Clustering is a Machine Learning technique that involves the grouping of data points. Given a set of data points, we can use a clustering algorithm to classify each data point into a specific group. In theory, data points that are in the same group should have similar properties and/or features, while data points in different groups should have highly dissimilar properties and/or features. Clustering is a method of unsupervised learning and is a common technique for statistical data analysis used in many fields. In Data Science, we can use clustering analysis to gain some valuable insights from our data by seeing what groups the data points fall into when we apply a clustering algorithm. Today, we’re going to look at 5 popular clustering algorithms that data scientists need to know and their pros and cons! K-Means is probably the best-known clustering algorithm. It’s taught in a lot of introductory data science and machine learning classes. It’s easy to understand and implement in code! Check out the graphic below for an illustration.
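
To make the grouping idea concrete, here is a compact, dependency-free K-Means sketch on one-dimensional data (a simplified illustration, not a production implementation; real work would typically use a library such as scikit-learn):

```python
# Compact K-Means on 1-D data: assign each point to its nearest centroid,
# then move each centroid to the mean of its assigned points, and repeat.
import random

def kmeans(points, k, iters=20, seed=0):
    random.seed(seed)
    centroids = random.sample(points, k)  # pick k initial centroids from the data
    clusters = [[] for _ in range(k)]
    for _ in range(iters):
        clusters = [[] for _ in range(k)]
        for p in points:
            nearest = min(range(k), key=lambda i: abs(p - centroids[i]))
            clusters[nearest].append(p)
        # Recompute centroids; keep the old one if a cluster went empty.
        centroids = [sum(c) / len(c) if c else centroids[i]
                     for i, c in enumerate(clusters)]
    return centroids, clusters

points = [1.0, 1.2, 0.8, 9.8, 10.1, 10.3]
centroids, clusters = kmeans(points, k=2)
print(sorted(centroids))  # two centroids, near 1.0 and 10.07
```

The two obvious groups in the data are recovered regardless of which points are sampled as the initial centroids, which illustrates both K-Means' simplicity and why it is the usual first stop in introductory classes.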

Ransomware Attack Leads to Discovery of Lots More Malware

The investigation concluded the unauthorized persons would have had the ability to access all of the Blue Springs computer systems, the clinic notes. "However, at this time, we have not received any indication that the information has been used by an unauthorized individual." The U.S. Department of Health and Human Services' HIPAA Breach Reporting Tool website, or "wall of shame," indicates that Blue Springs on July 10 reported the breach as a hacking/IT incident involving its electronic medical records and network server that exposed data on nearly 45,000 individuals. Blue Springs' front desk receptionist, who did not want to be identified by name, told Information Security Media Group Friday that the investigation had not yet determined the source of the ransomware attack, the source of the other malware discovered, whether the other malware might have been present on the practice's systems before the ransomware attack, or whether the infections were all part of the same attack. She said the practice chose to "rebuild" its systems and did not pay a ransom.

CIOs reveal their security philosophies

“Overly strict security creates a different risk — throttling information exchange and creativity can threaten a company’s competitive viability,” Johnson adds. “Poorly managed reactions to breaches — and all firms have been breached in some way — can lead to other business deterioration.” “Security is as much a human challenge as it is a technical challenge,” he concludes. “Dependable cybersecurity requires a three-part strategy of 1) superb technical implementation of the basics, 2) consistent education aimed at increasing awareness of employees, vendors, and executives, and 3) building a security team that is as motivated, skilled, and innovative as the bad guys.” In this edition of Transformation Nation, CIOs delineate their own IT security philosophies — dispatches from the front lines of cybersecurity strategy. The implications of a breach for corporate reputation, economic well-being, and personal security are immense. Through these accounts, CIOs reveal the many tension points in application and communication that they grapple with every day.

GDPR means it is time to revisit your email marketing strategies

No matter how private you think your emails are, every email you send and receive is stored on a remote hard drive you have no control over. If your email provider doesn’t encrypt your emails end to end (most don’t), all company emails are at risk. Encrypting employee email communications plays a huge role in maintaining GDPR compliance. The average employee won’t think twice about emailing co-workers about sensitive issues that may include data from the business database. For example, someone might send a customer’s credit card information to the sales department for processing a return. To protect your internal emails and maintain GDPR compliance, buying general encryption services isn’t enough. You need to know exactly how and when the data is and isn’t being encrypted. Not all encryption services are complete. For instance, if you’re using Microsoft 365, you’ve probably heard of a data protection product called Azure RMS. This product uses TLS security to encrypt email messages the moment they leave a user’s device. Unfortunately, when the messages reach Microsoft’s servers, they are stored unprotected.

Google, Cisco amp-up enterprise cloud integration

The Cisco/Google combination – which is currently being tested by an early access enterprise customer, according to Google – will let IT managers and application developers use Cisco tools to manage their on-premises environments and link them up with Google’s public IaaS cloud, which offers orchestration, security and ties to a vast developer community. In fact, the developer community is one area the companies have targeted recently by announcing a Cisco & Google Cloud Challenge, which is offering prizes worth over $160,000 to develop what Cisco calls “game-changing” apps using Cisco’s Container Platform with Google Cloud services. Cisco says the goal is to bring together its DevNet community and Google’s Technology Partners to deliver new hybrid-cloud applications for enterprise customers. Cisco VP & CTO of DevNet Susie Wee wrote in a blog that, in preparation for the Challenge, DevNet is offering workshops, office hours, and sandboxes using Cisco Container Platform with Google Cloud services to help customers and developers learn how to connect data from a private cloud, or even from edge devices, to the Google Cloud Platform to run analytics and employ machine learning.

Why 'Sophisticated' Leadership Matters -- Especially Now

When challenged by complexity, many leaders try to implement best practices such as lean management, restructuring or re-engineering. Such investments may indeed be necessary, but they are rarely sufficient. This is because the root cause of most stalls is that the leader has run up against the limits of his or her leadership sophistication. In other words, the leader is failing to reinvent him- or herself as the new kind of leader the organization now needs. This usually means that the leader doesn’t fully appreciate that intelligence, hard work and technical knowledge must now take a back seat to enhanced personal, interpersonal, political and strategic leadership capabilities. In other words, you will stall not because the complex challenges you face require changes in your organization, but because those sophisticated challenges require change in yourself. So how can you become a more sophisticated leader? Try pulling back, elevating your viewpoint and figuring out how you can take yourself to the next level.

Quote for the day:

"Next generation leaders are those who would rather challenge what needs to change and pay the price than remain silent and die on the inside." -- Andy Stanley

Daily Tech Digest - July 30, 2018

Chamber of Digital Commerce Sets Out ICO and Token Guidelines
Former Securities and Exchange Commission (SEC) commissioner and CEO of Patomak Global Partners Paul Atkins comments, “These principles are an important tool for responsible growth and smart regulation that strikes the right balance between protecting investors while allowing for innovation in this new technological frontier. We think it is important to explain the unique attributes of blockchain-based digital assets, which are not all strictly investment based, and provide guidance to consumers, regulators and the industry.” The whitepaper is broken up into three distinct sections. The first offers a comprehensive overview of current and future regulations to give investors a stronger understanding of securities laws in the U.S., Canada, the U.K. and Australia. The second part showcases industry-developed principles for both trading platforms and token sponsors to better promote safe and legal business practices and lower the risks to organizers and traders. 

The Toughbook T1 has a 5-inch screen and runs Android 8.1 Oreo. It allows retail workers, warehouse employees, or transportation and logistics employees to quickly scan barcodes for better productivity. The device also has a built-in barcode reader and high-speed connectivity to integrate with resource management systems and databases. The FZ-T1 is available in two models—one with Wi-Fi connectivity only, and another offering voice and data connection on AT&T and Verizon networks, as well as data connectivity through P.180, Panasonic's purpose-built network. The Toughbook L1 is a professional-grade tablet that can be mounted in a vehicle or used as a handheld device. It has a 7-inch screen and runs Android 8.1 Oreo. It includes an integrated barcode reader that is field-configurable for landscape or portrait modes. The L1 will be released in a Wi-Fi only model as well as versions that support data service on Verizon, AT&T and Panasonic's P.180.

IBM banks on the blockchain to boost financial services innovation

On Monday, the tech giant said a proof-of-concept (PoC) design has been created for the platform, dubbed LedgerConnect. The system is a distributed ledger technology (DLT) platform intended for enterprise financial services companies including banks, buy- and sell-side firms, FinTechs and software vendors. The goal for LedgerConnect is to bring these companies together to deploy, share, and use blockchain-based services hosted on the network in order to make adoption more cost effective for companies, as well as easier to access and deploy. Services will include Know Your Customer (KYC) processes, sanctions screening, collateral management, derivatives post-trade processing and reconciliation, and market data. "By hosting these services on a single, enterprise-grade network, organizations can focus on business objectives rather than application development, enabling them to realize operational efficiencies and cost savings across asset classes," IBM says.

Don’t Let your Data Lake become a Data Swamp

To cope with the growing volume and complexity of data and alleviate IT pressure, some are migrating to the cloud. But this transition—in turn—creates other issues. For example, once data is made more broadly available via the cloud, more employees want access to that information. Growing numbers and varieties of business roles are looking to extract value from increasingly diverse data sets, faster than ever—putting pressure on IT organizations to deliver real-time data access that serves the diverse needs of business users looking to apply real-time analytics to their everyday jobs. However, it’s not just about better analytics—business users also frequently want tools that allow them to prepare, share, and manage data. To minimize tension and friction between IT and business departments, moving raw data to one place where everybody can access it sounded like a good move. James Dixon, who coined the concept of the data lake, envisioned it as a large body of raw data in a more natural state, where different users come to examine it, delve into it, or extract samples from it.

3 Ways Automation & Integration Is Disrupting the HIT Status Quo

Integrated patient engagement solutions empower patients along the continuum of their healthcare experience, pre-visit to post-visit, with features such as self-scheduling, online access to consent forms and personal information, and communications with their providers via a user-friendly patient portal. And by engaging patients with this end-to-end lifecycle approach, practices can increase patient satisfaction rates, patient retention and referrals. ... “We wanted something that was easy to use for the patients and staff, straightforward, less expensive than our current solution, available to all our providers, and that would offer greater transparency to patients, particularly on which insurances we take,” notes Jared Boundy, MHA, director of operations for Washington-based Dermatology Arts. “We also felt that it needed to integrate with the other systems we already had in place. It had to be adaptable, too, as we didn’t want to pay an arm and a leg every time we added a provider or a location.”

AI Software Development: 7 Things You Need to Know

At the initial stage, machine learning needs substantial computing resources, while the data processing stage is far less demanding. Previously, this varying requirement in computing resources was difficult for those who wanted to implement machine learning but were unwilling to make big one-time investments in adequately powerful servers. As cloud technology emerged, satisfying this requirement became easy. AI software development services can rely on either a corporate or a commercial cloud, e.g. Microsoft Azure or AWS. ... As artificial intelligence techniques mature, more people are interested in using these practices to control complex real-world systems that have hard deadlines. ... AI is a huge field with a wide area to cover, so it is difficult to recommend just one programming language. Of course, there are a variety of programming languages that can be used, but not all offer the best value for your effort. Considering their simplicity, prototyping capabilities, usefulness, usability and speed, the languages considered the best options for AI are Python, Java, Lisp, Prolog and C++.

Connecting whilst building – benefits of the IoT in construction

While IIoT opens the door to a host of new opportunities such as cost reduction, worker safety, quality improvement and business growth, the prospect of gearing up for the next industrial revolution can cause apprehension. Implementing IIoT solutions can change the way IT interacts with production systems and field devices, but if this is matched with the right approach to connectivity, and realising the potential of the servitisation model, it needn’t keep construction companies awake at night. Connectivity is the lifeblood of the IoT and this is just as true in an industrial setting. Field connectivity is indispensable for conveying commands to field systems and devices in addition to acquiring data for further analysis. It tends to be a cross-cutting and cross-layer function in IIoT systems as both edge and cloud modules are able to access field data directly using one of a large number of protocols. These include OPC-UA (Unified Architecture), MQTT (Message Queue Telemetry Transport), DDS (Data Distribution Service), oneM2M and various other protocols as illustrated in the Industrial Internet Connectivity Framework.

Pushing the Boundaries of Computer Vision

Although augmented reality has occasionally been described as a bridge to true virtual reality, AR is actually more difficult to implement in some ways. Nevertheless, the technology has evolved rapidly in recent years, thanks in part to computer vision advances. At the core of AR is a challenge relevant to other fields of computer vision: object recognition. Small variations in objects can prove challenging for image recognition software, and even a change in lighting can cause mismatches. Experts at Facebook and other companies have made tremendous progress through deep learning and other artificial intelligence fields, and these advances have the potential to make AR and other vision fields dependent on object recognition more powerful in the coming years. Another transformative use-case is predicted to be agriculture. Agricultural science is charged with feeding the world, and computers have been making major strides in the field in recent years. Because farms are so large and often remote, image recognition enables individual farmers to be far more effective. Computer vision capable of detecting fruit can help farmers track progress and determine the right time for harvest.

Monitoring Your Data Center Like a Google SRE

SLO is used to define what SREs call the “error budget,” which is a numeric line in the sand. The error budget is used to encourage collective ownership of service availability and blamelessly resolve disputes about balancing risk and stability. For example, if programmers are releasing risky new features too frequently and compromising availability, this will deplete the error budget. SREs can point to the at-risk error budget, and argue for halting releases and refocusing coders on efforts to improve system resilience. This approach lets the organization as a whole balance speed and risk with stability effectively. Paying attention to this economy encourages investment in strategies that accelerate the business while minimizing risk: writing error- and chaos-tolerant apps, automating away pointless toil, advancing by means of small changes and evaluating “canary” deployments before proceeding with full releases. Monitoring systems are key to making this whole, elegant tranche of DevOps/SRE discipline work. It’s important to note that this has nothing to do with what kind of technologies you’re monitoring, the processes you’re wrangling or the specific techniques you might apply to stay above your SLOs.
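
The error-budget arithmetic itself is simple enough to sketch (the request counts here are hypothetical): a 99.9% SLO over ten million requests leaves a budget of 10,000 allowed failures, and the remaining fraction tells the team whether risky releases can continue.

```python
# Sketch of an SRE error budget: the failures an SLO tolerates over a window.

def error_budget(slo: float, total_events: int) -> int:
    """Number of failed events (requests, minutes, ...) the SLO allows."""
    return int(total_events * (1.0 - slo))

def budget_remaining(slo: float, total_events: int, failures: int) -> float:
    """Fraction of the budget still unspent; at or below 0 means halt releases."""
    budget = total_events * (1.0 - slo)
    return (budget - failures) / budget

monthly_requests = 10_000_000
print(error_budget(0.999, monthly_requests))             # 10000 allowed failures
print(budget_remaining(0.999, monthly_requests, 7_500))  # 0.25 of the budget left
```

When `budget_remaining` trends toward zero, that is the "numeric line in the sand" the article describes: SREs can point at it and argue for pausing feature releases in favor of resilience work.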

Utilize microservices to support a 5G network architecture

Microservices is the ideal cloud-based architecture for 5G, rather than a monolithic architecture. Only microservices can properly support a 5G network architecture, because no set of monolithic applications can deliver the same requirements of responsiveness, flexibility, updatability and scalability that 5G demands. Virtualized network services also must adapt to new technologies and demands on the system as they come along. With a microservices-based architecture, this is a relatively easy task, accomplished via changes to individual microservices rather than the whole system. The technologies included in 5G will likely change rapidly after the initial rollout, so this kind of adaptability is a necessity. Additionally, signal-related expectations of 5G, such as high availability, require the kind of flexibility that microservices can deliver. According to NGMN, remote-location equipment should be self-healing, which means it requires flexible, built-in, AI-based diagnostic and repair software capable of at least re-establishing lost communication when isolated.

Quote for the day:

"The People That Follow You Are A Reflection Of Your Leadership." -- Gordon Tredgold

Daily Tech Digest - July 29, 2018

C# 8 Ranges and Recursive Patterns

Ranges easily define a sequence of data. They are a replacement for Enumerable.Range(), except that a range defines the start and stop points rather than start and count, which helps you write more readable code. ... Pattern matching is one of the powerful constructs available in many functional programming languages like F#. Furthermore, pattern matching provides the ability to deconstruct matched objects, giving you access to parts of their data structures. C# offers a rich set of patterns that can be used for matching. Pattern matching was initially planned for C# 7, but after a while the .NET team found it needed more time to finish the feature. For this reason, the work was divided into two main parts: basic pattern matching, already delivered with C# 7, and advanced pattern matching for C# 8. C# 7 gave us the constant pattern, type pattern, var pattern and discard pattern. In C# 8, we will see more patterns, such as the recursive pattern, which consists of multiple sub-patterns like the positional pattern and the property pattern.
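
C# syntax aside, the start/stop-versus-start/count distinction can be illustrated in Python, whose range() and slicing already use start/stop semantics (this is an analogy in Python, not C# 8 code):

```python
# Enumerable.Range(start, count)-style construction vs. start/stop ranges,
# mimicked in Python to show why start/stop often reads more naturally.

def enumerable_range(start, count):
    """Old style: the caller supplies a start and a count."""
    return list(range(start, start + count))

print(enumerable_range(5, 3))  # [5, 6, 7] -- where does it end? do the math
print(list(range(5, 8)))       # [5, 6, 7] -- start/stop, like C# 8's 5..8

# C# 8 ranges also slice collections, e.g. arr[1..4]; Python's arr[1:4]:
arr = [10, 20, 30, 40, 50]
print(arr[1:4])                # [20, 30, 40]
```

In both languages the stop index is exclusive, so the end of one range can be the start of the next with no off-by-one bookkeeping.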

There are also several issues to consider in terms of both PCI and HIPAA compliance when working with CSPs. Among them is the requirement that comes up during compliance audits regarding where your data resides and what protective measures are in place. With cloud services, that’s sometimes easier said than done. Many CSPs employ a network of data centers that work together to provide high availability and security of your data. As a result, the data may be moved to different data centers across large geographic spans based on service levels, resource demand, cost, latency, disaster recovery and business continuity needs. For security reasons, CSPs may be reluctant to divulge the location of their data centers or where data is specifically located at any one time. Things can become more complex in the case of global providers. With the European Union's implementation of the General Data Protection Regulation, it is important to know where your data resides. Almost every business is touched by the impact of GDPR.

API Governance Models in the Public and Private Sector: Part 5

Beyond the technology, the legal department should have a significant influence over APIs going from development to production, providing a structured framework that applies generally across all services while also offering granular control for fine-tuning legal documents for specific services and use cases, with a handful of clear building blocks to help govern the delivery of APIs from the legal side of the equation. ... The legal department will play an important role in governing APIs as they move from development to production, and there needs to be clear guidance for all developers regarding what is required. As with the technical and business elements of delivering services, the legal components need to be easy to access, understand, and apply, but they must also protect the interests of everyone involved with the delivery and operation of enterprise services.

Data Analytics Is The New Co-Pilot For Every CIO

Measurements are important – you won’t be motivated to improve what you’re not measuring. Organizations must always understand their security posture in order to know how quickly they can react to potential breach events. ... Compliance has become a much bigger deal over the last few years. Where organizations were once concerned about doing only as much as required to tick the box, they are now concerned about doing as much as they practically can to align to both the letter and spirit of the regulations. ... Rely on the experts. There are software packages available that bring you coverage for multiple compliance regimes in a single application. It makes sense to leverage these pre-built tools, as most compliance regime requirements are similar and processing data multiple times to attain related outcomes is not efficient. ... Leverage analytics. As I stated above, ticking the box is no longer enough. You have to put best efforts into solving for the spirit of the regulation, and analytics can help you improve your ability to identify the conditions and incidents that the regulations are really getting after.

Here’s how to make AI inclusive

Governments around the world should prioritize preparing citizens for the proliferation of AI. Leaders should create a game plan that addresses the trajectory of job loss by asking difficult questions, including: should we slow down the evolution of technology to buy time to reskill the workforce? We know that certain jobs are going to become extinct. But can we minimize the impact by mapping out the “glide path” and helping prepare workers for new jobs before their current ones reach their natural conclusion? Think about AI like a car. There are two ways that the driver can reach a speed of 200 miles per hour. First, the driver can slam on the accelerator and go from 0 to 200 in a matter of seconds. Or, the driver can control the acceleration by gradually applying pressure and monitoring the speedometer. The second scenario is much safer, of course. The same is true when it comes to monitoring the acceleration of AI. Governments should not try to stop its progression.

7 Ways IoT Is Changing Retail in 2018

With IoT, you can set up sensors around the store that send loyalty discounts to certain customers when they stand near products with their smartphones, if those customers sign up for a loyalty program in advance. Additionally, you can use IoT to track items a customer has been looking at online, and send that customer a personalized discount when she’s in-store. Imagine if your customer perused your purses online, and then, in-store, received a discount on her favorite purse? Rather than offering general discounts on a wide variety of products, you can tailor each discount using IoT to maximize your conversion rates. Ultimately, finding ways to incorporate IoT devices into your day-to-day business requires creativity and foresight, but the benefits of IoT in retail -- as outlined above -- can help your business discover innovative solutions to attract more valuable and loyal long-term customers.

The rise of autonomous systems will change the world

“It’s tough to make predictions, especially about the future,” as physicist Niels Bohr is said to have remarked. One thing, however, will surely change the world as we know it today: the rise of autonomous systems. I would expect major progress in the synthesis of symbolic logic for (explicit) knowledge representation in combination with (implicit) deep neural networks. This development will lead to autonomous systems that learn while interacting with their environment, that are able to generalize, to draw deductions, and to adapt to new, previously unknown situations. ... One of my favourite application areas is exploratory search, i.e. searching where you don’t know exactly where the search process might lead you. Sometimes you might not be able to explicitly phrase your search intention, perhaps because you lack the vocabulary or are not an expert in the domain in which you are looking for information. In that case, you first have to gather information about your domain before you can perform pinpoint retrieval.

AI and Jobs: What’s The Net Effect?

Among workers mostly engaged in non-routine tasks, one in five will soon be using AI to some extent, according to Gartner’s research. However, those excited about using AI systems at work may need to temper their excitement. AI-powered systems will likely help with mundane tasks such as creating a work schedule, or systems might be able to prioritize emails so employees can focus on the most important tasks. However, these systems are likely to evolve into virtual secretaries over time, and those using these systems will find them to be valuable time-saving tools covering an ever-increasing range of tasks. AI seems to perpetually be viewed as a future technology. Research from O’Reilly, however, shows that the foundation for AI-empowered companies already exists. A total of 28 percent of the leaders polled in the early 2018 survey report already using deep learning, which is viewed as perhaps the most important AI technology for typical businesses. Furthermore, 54 percent of respondents plan on using deep learning for future projects.

Understanding Software System Behaviour With ML and Time Series Data

Because they own the memory of the simulation, they have access to the complete state of the system. Theoretically, this means it is possible to analyse the data to try to reverse-engineer what is going on at a higher level of understanding, just from looking at the underlying data. Although this tactic can provide small insights, looking at the data alone is unlikely to let you completely understand the higher-level behaviour of Donkey Kong. This analogy becomes really important when you are using only raw data to understand complex, dynamic, multiscale systems. Aggregating the raw data into a time-series view makes the problem more approachable. A good resource for this is the book Site Reliability Engineering, which can be read online for free. Understanding complex, dynamic, multiscale systems is especially important for an engineer who is on call. When a system goes down, he or she has to dig in to discover what the system is actually doing. For this, the engineer needs both the raw data and the means to visualise it, as well as higher-level metrics that can summarise the data.
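To make the raw-data-versus-aggregates point concrete, here is a small Python sketch (with invented event data) that collapses raw per-request latency events into a per-minute time series of the kind an on-call engineer would scan first:

```python
from collections import defaultdict

raw_events = [  # (unix_timestamp_seconds, latency_ms); invented sample data
    (1000, 12), (1010, 250), (1055, 30),
    (1065, 31), (1090, 900),
]

def aggregate_per_minute(events):
    """Collapse raw (timestamp, latency) events into per-minute buckets
    with a request count and worst-case latency per bucket."""
    buckets = defaultdict(list)
    for ts, latency_ms in events:
        buckets[ts // 60].append(latency_ms)
    return {m: {"count": len(v), "max_ms": max(v)} for m, v in buckets.items()}

series = aggregate_per_minute(raw_events)
for minute in sorted(series):
    print(minute, series[minute])
```

Each bucket trades detail for legibility: the engineer reads the per-minute summary first, then drops back to the raw events for the suspicious minute.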

The top security and risk management trends facing organizations

“Customer data is the lifeblood of ever-expanding digital business services. Incidents such as the recent Cambridge Analytica scandal or the Equifax breach illustrate the extreme business risks inherent to handling this data,” Gartner noted. “Moreover, the regulatory and legal environment is getting ever more complex, with Europe's GDPR the latest example. At the same time, the potential penalties for failing to protect data properly have increased exponentially.” In the U.S., the number of organizations that suffered data breaches due to hacking increased from under 100 in 2008 to over 600 in 2016. "It's no surprise that, as the value of data has increased, the number of breaches has risen too," said Firstbrook. "In this new reality, full data management programs — not just compliance — are essential, as is fully understanding the potential liabilities involved in handling data."

Quote for the day:

"Be a solution provider and not a part of the problem to be solved" -- Fela Durotoye

Daily Tech Digest - July 28, 2018

Trading has changed significantly with the introduction of computers. In the coming future, blockchain technology will not only eliminate intermediaries but also make the stock exchange decentralized, with no need for a central system to bring supply and demand together. Since the blockchain is shared by all participants, it is easy to prevent double-spending and verify who owns tokens at any particular point in time. It can be applied in this sector by using a digital currency like bitcoin, which can be stored and carried in the form of cryptographic tokens. For instance, since Bitcoin uses a peer-to-peer network to broadcast information about any transactions taking place, those transactions can be added to blocks that are cryptographically secured, forming an immutable blockchain. Tokens can also be tracked and ‘colored’ to distinguish them and can be associated with the ownership of certain assets like stocks, bonds etc. In this way, many different assets can be transferred using the Bitcoin blockchain, but there are also other cryptocurrency networks designed for exchanging multiple assets, such as Ripple.
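A minimal Python sketch of the hash-chaining idea behind that immutability; the block layout and transfer records are illustrative, not the format of Bitcoin, Ripple, or any real network:

```python
# Each block commits to the hash of its predecessor, so altering any
# recorded transfer invalidates every later block.
import hashlib
import json

def block_hash(block):
    """Deterministic SHA-256 digest of a block's contents."""
    payload = json.dumps(block, sort_keys=True).encode()
    return hashlib.sha256(payload).hexdigest()

def add_block(chain, transfers):
    prev = block_hash(chain[-1]) if chain else "0" * 64
    chain.append({"prev": prev, "transfers": transfers})

def verify(chain):
    """Check that every block's 'prev' matches its predecessor's hash."""
    return all(
        chain[i]["prev"] == block_hash(chain[i - 1])
        for i in range(1, len(chain))
    )

chain = []
add_block(chain, [{"from": "alice", "to": "bob", "token": "stock-XYZ"}])
add_block(chain, [{"from": "bob", "to": "carol", "token": "stock-XYZ"}])
assert verify(chain)

chain[0]["transfers"][0]["to"] = "mallory"  # tamper with history
assert not verify(chain)  # the tampering is immediately detectable
```

Because each block commits to its predecessor's hash, rewriting any historical transfer breaks verification of every later block, which is what makes double-spending and back-dated ownership claims detectable.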

Data-Driven? Think again

When the analysis is complex or the data are hard to process, a pinch of tragedy finds its way into our comedy. Sometimes boiling everything down to arrive at that 4.2 number takes months of toil by a horde of data scientists and engineers. At the end of a grueling journey, the data science team triumphantly presents the result: it’s 4.2 out of 5! The math was done meticulously. The team worked nights and weekends to get it in on time. What do the stakeholders do with it? Yup, same as our previous 4.2: look at it through their confirmation bias goggles, with no effect on real-world actions. It doesn’t even matter that it’s accurate—nothing would be different if all those poor data scientists just made some numbers up. Using data like that to feel better about actions we’re going to take anyway is an expensive (and wasteful) hobby. Data scientist friends, if your organization suffers from this kind of decision-maker, then I suggest sticking to the most lightweight and simple analyses to save time and money. Until the decision-makers are better trained, your showy mathematical jiu jitsu is producing nothing but dissipated heat.

How Big Data Can Play An Essential Role In Fintech Evolution

In the banking and fintech industry, as in many others, offering personalised services is one of the greatest marketing tools available. Fintech companies like Contis Group report that more and more of their customers search for personalised and flexible fintech services and packages. The pressure to create personalised services is also driven by the increasing number of companies adopting such strategies, which creates keen competition. Alternative banking institutions began to use the services of fintech companies to improve their offerings: more personalised packages, but also a better, more comprehensive and faster infrastructure, which contributes to a more personalised and frictionless experience for the final consumer. Not only can fintech companies identify spending patterns to make banking recommendations, but they can also use those patterns to help the final user save more money if that is one of their goals.
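As a hedged sketch of the spending-pattern idea (the sample transactions and categories are invented), a first step could be as simple as aggregating transactions by category:

```python
# Derive a simple spending profile from transaction data, the raw
# material for personalised recommendations. Sample data is invented.
from collections import Counter

transactions = [  # (category, amount)
    ("dining", 42.0), ("groceries", 95.5), ("dining", 18.0),
    ("transport", 12.0), ("dining", 30.0),
]

totals = Counter()
for category, amount in transactions:
    totals[category] += amount

top_category, top_spend = totals.most_common(1)[0]
print(f"Most spending goes to {top_category}: {top_spend:.2f}")
# A recommender might now suggest a cashback offer for that category,
# or a savings nudge if the user has set a budget goal.
```

Real systems layer classification (mapping raw merchant strings to categories) and prediction on top, but the profile itself starts with exactly this kind of aggregation.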

DevOps for Data Scientists: Taming the Unicorn

Developers have their own chain of command (i.e. project managers) who want to get features out for their products as soon as possible. For data scientists, this would mean changing model structure and variables. They couldn’t care less what happens to the machinery. Smoke coming out of a data center? As long as they get their data to finish the end product, they couldn’t care less. On the other end of the spectrum is IT. Their job is to ensure that all the servers, networks and pretty firewall rules are maintained. Cybersecurity is also a huge concern for them. They couldn’t care less about the company’s clients, as long as the machines are working perfectly. DevOps is the middleman between developers and IT. ... Imagine pushing your code to production. And it works! Perfect. No complaints. Time goes on and you keep adding new features and keep developing it. However, one of these features introduces a bug that badly messes up your production application. You were hoping one of your many unit tests might have caught it.
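The scenario above is exactly what unit tests exist for. A minimal Python illustration (the function and the regression are invented; in practice you would use pytest or unittest): tests written when a feature first ships keep guarding it as later features land.

```python
# A pricing helper with a validation rule. A later "feature" that
# loosens or removes the check would be caught by the tests below.
def apply_discount(price, percent):
    if not 0 <= percent <= 100:
        raise ValueError("percent must be between 0 and 100")
    return round(price * (1 - percent / 100), 2)

# Tests written when the feature first shipped:
assert apply_discount(100.0, 10) == 90.0
assert apply_discount(19.99, 0) == 19.99

# The regression guard: an out-of-range discount must never slip through.
try:
    apply_discount(100.0, 150)
except ValueError:
    pass
else:
    raise AssertionError("out-of-range discount should be rejected")
```

Run in CI on every push, assertions like these turn "I was hoping a test might have caught it" into "a test did catch it, before production".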

The Democratization of Data Science

Once an organization is delivering the access and education needed to democratize data among its employees, it may be time to adjust roles and responsibilities. At a minimum, teams should be able to access and understand the data sets most relevant to their own functions. But by equipping more team members with basic coding skills, organizations can also expect non–data science teams to apply this knowledge to departmental problem solving — leading to greatly improved outcomes. If your workforce is data-literate, for example, your centralized data team can shift its focus from “doing everyone else’s data work” to “building the tools that enable everyone to do their data work faster.” Our own data team doesn’t run analyses every day. Instead, it builds new tools that everyone can use so that 50 projects can move forward as quickly as one project moved before.

Success With AI in Banking Hinges on the Human Factor

The reason banking operations aren’t relying on AI isn’t an unwillingness to adapt to change. Rather, the industry lacks the right talent to drive that change. There is a significant disconnect between the recognition of a need and an appropriate response. The Accenture research found that while executives believe that most of their employees are not ready to work with AI, only 3% of executives are planning to increase investments in retraining workers in the next three years. This is unfortunate, since employees indicate that they are not only impatient to thrive in an intelligent enterprise that can disrupt markets and improve their working experience; they are also eager to acquire the new skills required to make this happen. “Banks’ lack of commitment to upskilling and reskilling employees to learn how to collaborate with intelligent technologies will significantly hinder their ability to deploy and benefit from them,” McIntyre explained.

The Connected Vehicle Environment is expected to deliver situational awareness for traffic management and operations based on data from connected vehicle equipment. That equipment will be installed in vehicles and on a select group of roadways and intersections where the technology can reduce the number of accidents and support truck platooning, which involves electronically linking groups of trucks so they drive close to one another and accelerate or brake simultaneously. The city will install 113 roadside units that will contain some or all of the following: a traffic signal controller, a Global Navigation Satellite System (GNSS) receiver to pinpoint locations, a wireless dedicated short-range communications (DSRC) radio and a message processing unit. Meanwhile, 1,800 onboard units will be installed in city fleet vehicles and volunteer citizen vehicles that will communicate with the roadside units and one another. The units will contain a GNSS receiver, a vehicle data bus, a DSRC radio, a processing unit, a power management system, software applications and a display.

IoT and data governance – is more necessarily better?

Organizations have realized that data is a strategic asset and a lot of them are trying to commoditize it. In the case of IoT, not all data is created equal. Simply hoarding data because it may be useful one day may create a much higher risk than making decisions about data that make sense for a specific organization. In the case of IoT, this has become a huge challenge because smart devices can gather unimaginable amounts of data. However, the fact that they can doesn’t mean that they should. I will not get into the details of risks around cybersecurity because that has been debated ad nauseam. I am interested in discussing the other side of the coin: business opportunities. What does having a clear strategy for the collection and use of data gathered from IoT devices mean in terms of revenue and profitability? How can data governance help achieve that goal? Data governance is the framework under which data is managed within an organization to ensure the appropriate collection (the “what to use”), processing (the “how to use”), retention (the “until when to use”) and relevance (the “why to use”) of data.
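One illustrative (and deliberately simplistic) way to encode those four questions in code is a per-field policy table that IoT readings must pass before they are retained. The field names, purposes, and retention windows below are invented examples, not a standard:

```python
# Governance-as-code sketch: "what" (declared fields), "why" (purpose),
# and "until when" (retention) checked before any reading is kept.
POLICY = {  # field name -> (declared purpose, retention in days)
    "temperature": ("predictive maintenance", 365),
    "location": ("fleet routing", 30),
}

def admit(field, purpose_given, age_days):
    """Retain a reading only if its field is declared, the stated purpose
    matches, and the reading is within its retention window."""
    if field not in POLICY:
        return False  # undeclared data is not hoarded: can does not mean should
    purpose, retention_days = POLICY[field]
    return purpose_given == purpose and age_days <= retention_days

assert admit("temperature", "predictive maintenance", 100)
assert not admit("cabin_audio", "unknown", 1)  # collectable, but not governed
```

The point is not the three-line check itself but that collection, purpose, and retention become explicit, reviewable decisions instead of defaults.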

Raspberry Pi gets supercharged for AI by Google's Edge TPU Accelerator

Machine-learning models will still need to be trained using powerful machines or cloud-based infrastructure, but the Edge TPU will accelerate the rate at which these trained models can run and be used to infer information from data, for example, to spot a specific make of car in a video or to perform speech recognition. While AI-related tasks like image recognition used to be run in the cloud, Google is pushing for machine-learning models to also be run locally on low-power devices such as the Pi. In recent years Google has released both vision and voice-recognition kits for single-board computers under its AIY Projects program. Trained machine-learning models available to run on these kits include face/dog/cat/human detectors and a general-purpose image classifier. Google is also releasing a standalone board that includes the Edge TPU co-processor and that bears a close resemblance to the Raspberry Pi. The credit-card-sized Edge TPU Dev Board is actually smaller than the Pi, measuring 40x48mm, but like the Pi it packs a 40-pin expansion header that can be used to wire it up to homemade electronics.

Data Analytics or Data Visualizations? Why You Need Both

Depending upon the level of detail that stakeholders need to draw actionable conclusions, as well as the need to interact with or drill down into the data, traditional data analytics might not be sufficient for businesses to excel in today’s competitive marketplace. Additional tools are needed to help extract more timely, more nuanced, and more interactive insights than data analysis alone can provide. Those tools are data visualization tools. The reason data analytics is limited might be simple enough. Data analytics helps businesses understand the data they have collected. More precisely, it helps them become cognizant of the performance metrics within the collected data that are most impactful to the business. And it can provide a clearer picture of the business conditions that are of greatest concern to decision-makers. But analytics does not do what data visualization can do: help to communicate and explain that picture with precision and brevity, in a format that the brain consumes exceedingly quickly. The data itself isn’t changed by data viz, and no further analysis is done. But two-dimensional tables of data are not very amenable to learning; the mind tends to gloss over large amounts of data, scan for the highest and lowest values, and miss the details in between.
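To illustrate, here is a small data set rendered as a chart rather than a table. Matplotlib would be the usual tool; a plain-text bar chart keeps this sketch self-contained, and the regional figures are invented:

```python
# The same numbers a table would bury become legible at a glance.
metrics = {"North": 42, "South": 17, "East": 35, "West": 8}

def ascii_bar_chart(data, width=40):
    """Render label/value pairs as proportional text bars."""
    peak = max(data.values())
    lines = []
    for label, value in data.items():
        bar = "#" * round(width * value / peak)
        lines.append(f"{label:<6} {bar} {value}")
    return "\n".join(lines)

print(ascii_bar_chart(metrics))
```

The outlier regions stand out immediately, with no scanning of raw cells, which is precisely the communication job visualization does on top of analysis.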

Quote for the day:

"Tomorrow's leaders will not lead dictating from the front, nor pushing from the back. They will lead from the centre - from the heart" -- Rasheed Ogunlaru

Daily Tech Digest - July 27, 2018

Mastering Spring framework 5, Part 1: Spring MVC

Spring MVC is the Spring framework's traditional library for building Java web applications. It is one of the most popular web frameworks for building fully functional Java web applications and RESTful web services. In this tutorial, you'll get an overview of Spring MVC and learn how to build Java web applications using Spring Boot, Spring Initializr, and Thymeleaf. We'll fast-track our Spring MVC web application with the help of Spring Boot and Spring Initializr. Given input for the type of application to be built, Spring Initializr uses the most common dependencies and defaults to set up and configure a basic Spring Boot application. You can also add custom dependencies, and Spring Initializr will include and manage them, ensuring version compatibility with both third-party software and Spring. Spring Boot applications run standalone, without requiring you to provide a runtime environment. In this case, since we're building a web application, Spring Boot will automatically include and configure Tomcat as part of the app's runtime. We can also customize the app by adding an H2 database driver to our Maven POM file.

5 Keys to Creating a Data Driven Culture

With businesses reconstructing their entire model to accommodate the need for digital change, it is worth asking what is causing this disruption. The need for digital change starts with data. Data has become the need of the hour, and to manage and extract value from it, organizations need to go where the customers are: digital. Data is being generated by customers in the digital world, and organizations are incorporating this digital change in a bid to get hold of that data. IoT devices and smartphones are playing an important role in data generation, producing data that matters to every organization. Customers are not the only ones generating this data. From smart city technologies such as connected cars, trains, and video surveillance, to businesses themselves, data is generated at a meteoric rate. The digital interactions that every business has with its customers are one of the major sources of data, and businesses often ponder how they could use these data sources to reach meaningful insights that help them in real time.

New NetSpectre Attack Can Steal CPU Secrets via Network Connections

Although the attack is innovative, NetSpectre also has its downsides (or upsides, depending on which side of the academics/users divide you stand). The biggest is the attack's woefully slow exfiltration speed, which is 15 bits/hour for attacks carried out via a network connection and targeting data stored in the CPU's cache. Academics achieved higher exfiltration speeds —of up to 60 bits/hour— with a variation of NetSpectre that targeted data processed via a CPU's AVX2 module, specific to Intel CPUs. Nonetheless, both NetSpectre variations are too slow to be considered valuable to an attacker. This makes NetSpectre just a theoretical threat, and not something that users and companies should be planning for with immediate urgency. But as we've seen in the past with Rowhammer attacks, as academics spend more time probing a topic, exfiltration speeds will eventually go up, while the technical limitations that prevent such attacks from working will slowly dissipate.
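Some quick arithmetic on the quoted rates shows why the threat is theoretical for now: even a small secret takes hours to days of sustained, error-free leakage.

```python
# Back-of-the-envelope: time to exfiltrate a secret at NetSpectre's
# reported rates of 15 and 60 bits/hour.
def hours_to_leak(secret_bits, rate_bits_per_hour):
    return secret_bits / rate_bits_per_hour

for rate in (15, 60):
    print(f"{rate} bits/hour: 128-bit AES key in {hours_to_leak(128, rate):.1f} h, "
          f"2048-bit RSA key in {hours_to_leak(2048, rate) / 24:.1f} days")
```

At 15 bits/hour a 128-bit key alone takes about 8.5 hours, and a 2048-bit key nearly a week, assuming the victim stays online and the side channel stays noise-free the whole time.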

Embracing RPA - Opportunities and Challenges for Accountancy Profession

Once IT and security risk teams are satisfied with the IT architecture, the process is documented in detail and can be carried forward for implementation. Key sectors where RPA is playing a significant role in bringing in process efficiencies include highly regulated verticals such as healthcare, banking, financial services and insurance. Other major sectors include telecommunications, utilities, mining, travel and retail. ... Business users of the organisation review the work of the robots, resolve any exceptions and escalate, if required, to the identified stakeholder for resolution. In the long run, the bots can become self-learning, taking RPA to the level of decision making. RPA is believed to revolutionise and redefine the way we work, making processes smarter and quicker. Deployment has commenced in most large businesses, will continue to grow, and is expected to become cognitive within the next five years. Further, many predict that RPA will develop into a machine learning platform by around 2025-2026.

Containers Provide the Key to Simpler, Scalable, More Reliable App Development

Kubernetes originally came out of Google, and it’s basically an orchestration layer around containers. For example, if I’m writing a containerized application, I can run it on top of Kubernetes, and Kubernetes will handle a lot of the underlying infrastructure orchestration—specifically, things like scaling up to meet demand or scaling down when demand is light. If servers crash, it will spin up more. The application developer simply says, “Hey, here are my containers. This is what they look like. Run them,” and then Kubernetes manages and orchestrates all of the underlying capacity. Kubernetes works whether you’re developing an application for three people or a global enterprise. What you’re doing is applying good architectural structure around a large-scale application whether you need it or not. So, you’re getting inherent reliability and scaling abilities along with capabilities to address and handle failures. For example, let's say I deploy a cluster within an on-prem or cloud infrastructure region and it is spread across three different physical availability domains.

CCTV and the GDPR – an overview for small businesses

The GDPR requires data controllers and processors to implement “appropriate technical and organisational measures” to protect personal data. This entails an approach based on regular assessments to ensure that all risks are appropriately addressed. For instance, access to CCTV systems must be limited to authorised personnel, which is especially important where systems are connected to the Internet or footage is stored in the Cloud, and there is a greater risk of unauthorised access. Surveillance systems should also incorporate privacy-by-design features, including the ability to be switched on or off, and the option to switch off image or sound recordings independently where it would be excessive to capture both. CCTV equipment must also be of a sufficient quality and standard to achieve its stated purpose. The international standard for information security management, ISO 27001, is an excellent starting point for implementing the technical and organisational measures necessary under the GDPR.

Why a product team’s experimentation goes wrong

The only thing worse than not running experiments is running experiments that are misinterpreted. There are several ways in which companies misunderstand the statistics behind experiments. Firstly, companies are overly reactive to early returns. Early on during experiments there are few conversions and experiment results swing wildly. When teams “peek early” at results, they frequently overvalue the data and end experiments prematurely. It is very common for the direction of a metric to swing over the course of an experiment, and teams that do not have the patience to wait are at the mercy of random chance. Secondly, stakeholders often create arbitrary pressures and deadlines to get answers early. In many business processes, management can improve productivity by introducing pressure and deadlines for teams. However, in the realm of science, this behaviour causes the opposite of the intended effect. Ordering teams to deliver results by a certain date often causes them to interpret insignificant data through gut feel. While these decisions can feel scientific to executives, they are all too often incorrect and give a false certainty about the wrong direction.
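The peeking effect is easy to demonstrate with a simulation. The sketch below runs A/A tests (no real difference between variants) in Python, checking a two-proportion z-test after every batch and stopping at the first "significant" result; the conversion rate, batch size, and thresholds are illustrative:

```python
import random

random.seed(0)

def peeking_trial(n_batches=20, batch=100, z_cut=1.96, p_true=0.10):
    """One A/A test: identical 10% conversion in both arms, peeking after
    each batch; return True if any peek looks 'significant'."""
    a = b = 0
    for i in range(1, n_batches + 1):
        a += sum(random.random() < p_true for _ in range(batch))
        b += sum(random.random() < p_true for _ in range(batch))
        n = i * batch                       # visitors per arm so far
        pa, pb = a / n, b / n
        pooled = (a + b) / (2 * n)
        se = (2 * pooled * (1 - pooled) / n) ** 0.5
        if se > 0 and abs(pa - pb) / se > z_cut:
            return True                     # would stop and declare a winner
    return False

trials = 500
false_positives = sum(peeking_trial() for _ in range(trials))
print(f"A/A tests wrongly declared significant: {false_positives / trials:.0%}")
```

Even though each individual check uses a nominal 5% threshold, stopping at the first significant peek flags a "winner" in a far larger share of these no-difference experiments, which is exactly how impatient teams ship the wrong variant with apparent statistical backing.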

With Today’s Technology, Do You Really Need A Financial Advisor?

Finding an asset to invest in is one thing, but understanding how to implement it is another. Before investing in any funds, it is very important to study the historical data of the asset class. Sure, most of the time past performance doesn’t necessarily correlate with future performance, but it is reasonable to think that some historical risk-reward relationships are likely to persist (i.e., long-term, stocks could be expected to outperform bonds, but with a higher degree of volatility). The financial advisor will look at all of this and present you with an implementation plan that is likely to benefit you the most. While choosing assets to invest in, another aspect that clients usually overlook is taxes. If the future returns on an asset turn out to be average while the taxes on them are high, then the overall return for an investor will be negatively affected. This is why tax management is important, as tax-conscious financial planning and tax-efficient portfolio construction can lead to higher returns.
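The risk-reward relationship mentioned above is easy to see numerically. A small sketch with invented yearly return series, using the sample standard deviation as the volatility measure:

```python
# Compare mean return and volatility of two made-up asset histories.
def mean(xs):
    return sum(xs) / len(xs)

def volatility(returns):
    """Sample standard deviation of a return series."""
    m = mean(returns)
    return (sum((r - m) ** 2 for r in returns) / (len(returns) - 1)) ** 0.5

stocks = [0.12, -0.05, 0.20, 0.07, -0.10, 0.15]  # invented yearly returns
bonds = [0.03, 0.04, 0.02, 0.05, 0.03, 0.04]

print(f"stocks: mean {mean(stocks):.1%}, volatility {volatility(stocks):.1%}")
print(f"bonds:  mean {mean(bonds):.1%}, volatility {volatility(bonds):.1%}")
```

The higher-mean series is also the far choppier one, which is the trade-off an advisor weighs against each client's horizon and risk tolerance.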

This company changes the DNA of investing — through machine learning

Simply put, a computer can be taught what ‘successful trading’ looks like, and combine such information from various users to build an investment portfolio that draws from their cumulative wisdom. It is no wonder, then, that financial giants such as JPMorgan Chase and Goldman Sachs are openly utilizing machine learning for their investing practices. After all, they have the resources and the data to make it work. However, this power is not reserved for these giant corporations. There are instances in which machine learning can benefit the ‘little guy’ as well. eToro’s declared mission is to disrupt the traditional financial industry and break down the barriers between private investors and professional-level practices. One such instance can be seen in eToro’s CopyFunds Investment Strategies, which are managed thematic portfolios, powered by advanced machine learning algorithms. This means private individuals now have access to technology previously reserved for giant corporations.

The Commercial HPC Storage Checklist – Item 2 – Start Small, Scale Large

As the HPC project moves into full-scale production, the organization then faces the opposite problem: making sure the system can scale large enough to continue to meet the capacity demands of the project. Scaling out requires meeting several challenges. First, the system has to integrate new nodes into the cluster successfully, since additional nodes provide the needed capacity and performance. However, adding another node is not always as straightforward as it should be. Many systems require adding the node manually as well as manually rebalancing data from other nodes to the new node. The commercial HPC storage customer should look for an HPC storage system that can grow with them as their needs evolve. It should start small during the initial phases of development but scale large as the environment moves into production. The system should make the process of adding nodes as simple as possible: automatically discovering available nodes, adding them to the cluster, and rebalancing cluster data without impacting storage performance.
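One common way storage systems keep node addition cheap is consistent hashing, sketched below with rendezvous (highest-random-weight) hashing; the node names and object counts are illustrative, and real products use their own placement algorithms:

```python
# With consistent hashing, adding a node relocates only the keys that map
# to the new node instead of reshuffling the whole cluster.
import hashlib

def owner(key, nodes):
    """Rendezvous hashing: each key goes to the node with the highest
    hash(node, key); deterministic and requires no shared ring state."""
    return max(nodes, key=lambda n: hashlib.sha256(f"{n}:{key}".encode()).hexdigest())

keys = [f"object-{i}" for i in range(1000)]
before = {k: owner(k, ["node-a", "node-b", "node-c"]) for k in keys}
after = {k: owner(k, ["node-a", "node-b", "node-c", "node-d"]) for k in keys}

moved = sum(before[k] != after[k] for k in keys)
print(f"{moved} of {len(keys)} objects moved")
```

Only the keys the new node now owns move to it, roughly 1/N-th of the data for an N-node cluster, which is why a well-designed system can rebalance in the background without a performance cliff.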

Quote for the day:

"Ever tried. Ever failed. No matter. Try again. Fail again. Fail better." -- Samuel Beckett

Daily Tech Digest - July 25, 2018

Are Initial Coin Offerings leaking money?
The bottom line is that ICOs are being constructed with serious holes in them. Worse still, as the numbers from EY show, cyber criminals are taking advantage. Companies running ICOs are drawing huge sums of money in a very narrow window of time. If something goes wrong once the ICO is live, there is little room for manoeuvring and precious little legal recourse that can realistically be taken. These are the perfect conditions for cyber criminals to exploit. There is a high financial motivation, and they’ve been drawn to ICOs like sharks drawn to churn in the water. The consequence of an attack? Well, there are two parties that could be affected: the ICO organisers and the investors. Just one vulnerability is enough for attackers to steal investors’ money and do irreparable damage to the corporate reputation of the ICO organiser. The need to patch these holes is apparent, but organisations are working on short time frames and might not realise where they are most vulnerable. So what are the main points of weakness?

Rolls-Royce Is Building Cockroach-Like Robots to Fix Plane Engines

Rolls-Royce believes these tiny insect-inspired robots will save engineers time by serving as their eyes and hands within the tight confines of an airplane’s engine. According to a report by The Next Web, the company plans to mount a camera on each bot to allow engineers to see what’s going on inside an engine without having to take it apart. Rolls-Royce thinks it could even train its cockroach-like robots to complete repairs. “They could go off scuttling around reaching all different parts of the combustion chamber,” Rolls-Royce technology specialist James Cell said at the airshow, according to CNBC. “If we did it conventionally it would take us five hours; with these little robots, who knows, it might take five minutes.” Rolls-Royce has already created prototypes of the little bot with the help of robotics experts from Harvard University and the University of Nottingham. But they are still too large for the company’s intended use. The goal is to scale the roach-like robots down to stand about half an inch tall and weigh just a few ounces, which a Rolls-Royce representative told TNW should be possible within the next couple of years.

While tight integration is desirable, these systems have many of the same networking challenges as other data center deployments, including requirements for scalability, automation, security and management of traffic flows. Additionally, they need to link to other data center resources inside the data center, at remote data centers and in the cloud. Software-defined networking architecture can ease some of the scaling, automation, security and connectivity challenges of hyper-converged system deployments. Hyper-converged systems integrate storage, computing and networking into a single system -- a box or pod -- in order to reduce data center complexity and ease deployment challenges associated with traditional data center architectures. A hyper-converged system comprises a hypervisor, software-defined storage and internal networking, all of which are managed as a single entity. Multiple pods can be networked together to create pools of shared compute and storage.

Big Tech is Throwing Money and Talent at Robots for the Home

Whether or not the robots catch on with consumers right away is almost beside the point because they’ll give these deep-pocketed companies bragging rights and a leg up in the race to build truly useful automatons. “Robots are the next big thing,” said Gene Munster, co-founder of Loup Ventures, who expects the U.S. market for home robots to quadruple to more than $4 billion by 2025. “You know it will be a big deal because the companies with the biggest balance sheets are entering the game.” Many companies have attempted to build domestic robots before. Nolan Bushnell, a co-founder of Atari, introduced the 3-foot-tall, snowman-shaped Topo Robot back in 1983. Though it could be programmed to move around by an Apple II computer, it did little else and sold poorly. Subsequent efforts to produce useful robotic assistants in the U.S., Japan and China have performed only marginally better. IRobot Corp.’s Roomba is the most successful, having sold more than 20 million units since 2002, but it only does one thing: vacuum.

“Enterprise Architecture As A Service” - How To Reach For The Stars

Educate those implementing your value chain in best open practices. To deliver EA As A Service, one would do well to ensure services are delivered through best practices that are open, because this enables an organization to train easily, hire selectively, and produce consistently. Of course, one might ask about differentiation – the secret sauce for differentiation will be in your proven ability to deliver fast and on target! Apply the best-in-class tools proven to improve production capability. Similar to the above, deciding upon and utilizing a consistent set of best-in-class tools helps ensure that deliverables are consistent among clients and enables reuse, which can improve the speed and quality of delivery. Tools that support the best open practices add even more. Collaborate with partners to evolve the best open practices. Keeping in mind that differentiation comes from how well you deliver EA As A Service, collaboration on the best open practices provides an avenue to improve those practices based on real experiences, improves market perception, and helps keep the bar raised for the industry.

Micropsia Malware

Controlled by Micropsia operators, the malware is able to register for USB-volume-insertion events to detect newly connected USB flash drives. This functionality is detailed in an old blog post. Once an event is triggered, Micropsia executes a RAR tool to recursively archive files based on a predefined list of file extensions ... Most of the malware capabilities mentioned above produce outputs written to the file system which are later uploaded to the C2 server. Each module writes its own output in a different format, but surprisingly in a non-compressed and non-encrypted fashion. Micropsia’s developers decided to solve these issues by implementing an archiver component that executes the WinRAR tool. The malware first looks for an already installed WinRAR tool on the victim’s machine, searching in specific locations. In the event a WinRAR tool is not found, Micropsia drops the RAR tool found in its Windows Portable Executable (PE) resource section to the file system.

FinTech’s road to financial wellness

It’s one thing to build up a pot of money (saving), but it’s also vital to make that money work hard for you (investing). Investment platforms like Moneybox and Nutmeg are giving everyday people the ability to make their money go further. Robo-advice in particular is making it considerably easier for consumers to invest their money in a way that matches their circumstances and attitude to risk. A key benefit of these start-ups is that they often have low minimum investment limits, which has led to younger generations and those with small savings pots being able to invest. ... A recent report found the insurance sector lags behind only the utilities sector when it comes to disappointing customers with a poor online customer experience. These bad experiences are causing consumers to be put off dealing with insurance and insurers, meaning those consumers often aren’t financially protected. InsurTech companies like Lemonade, however, are using behavioural economics and new technology to create aligned incentives between the insurer and the customer.

Securing Our Interconnected Infrastructure

While it's encouraging that the House is leaning forward on industrial cybersecurity and committed to authorizing and equipping the Department of Homeland Security to protect our critical infrastructure, this remains largely a private-sector problem. After all, over 80% of America's critical infrastructure is privately owned, and the owners and operators of these assets are best positioned to address their risks. In doing so, one of the questions companies are asking themselves is how to reconcile the risks and rewards of the interconnected world. Should we simply retreat into technological isolationism and eschew the benefits of connectivity in the interest of security, or is there a better way to manage the risk? The former view is gaining a growing chorus of supporters, especially among security researchers. The latest call comes from Andy Bochman of the Department of Energy's Idaho National Labs. Bochman argued this past May in Harvard Business Review that the best way to address the cyber-risk to critical infrastructure is "to reduce, if not eliminate, the dependency of critical functions on digital technologies and their connections to the Internet."

The race to build the best blockchain

Things move incredibly fast in the blockchain world. Ethereum is three years old. Projects like Cardano and EOS, sometimes called "blockchain 2.0" projects, are already considered giants in the space, with a combined token market cap of roughly $11.8 billion despite barely being operational. Cardano, which takes a slow and steady approach, with every iteration of the software peer-reviewed by scientists, is promising, but it hasn't fully launched its smart contract platform yet. EOS, an incredibly well-funded startup that launched in June, is another huge contender. However, EOS has a complicated governance process that, together with a slew of freshly discovered bugs, caused a fair amount of trouble right after launch. With an estimated $4 billion in its pocket, EOS has the means to do big things, but it will take some time to see whether it can live up to the promise. But there's already a new breed of blockchain startups coming. They've been working, often in the shadows, to develop new concepts and technologies that may make the promise of a fast, decentralized app platform a reality.

Serverless vs. containers: What's best for event-driven apps?

Event processing is very different from typical transaction processing. An event is a signal that something is happening, and it often requires only a simple response rather than complex edits and updates. Transactions are also fairly predictable, since they come from specific sources in modest numbers. Events, however, can originate anywhere, and their frequency can range from nothing at all to tens of thousands per second. These important differences between transactions and events launched the serverless trend and also drew renewed attention to the style known as functional programming. Functional programming is pretty simple. A function -- or lambda, as it is often called -- is a software component whose output depends only on its input. If Y is a function of X, then Y varies only as X does. For practical reasons, functions don't internally store data that could change their outputs. Therefore, any copy of a function can process the same input and produce the same output. This facilitates highly resilient and scalable applications.
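The "same input, same output" property described above can be sketched with a small pure-function event handler. This is an illustrative example only; the event shape, field names, and threshold are invented, not taken from any particular serverless platform:

```python
# A pure function: its result depends only on the input event, and it keeps
# no internal state. Because of this, a platform can run any number of
# identical copies in parallel, or retry a failed invocation, and every copy
# produces the same output for the same event.
def handler(event: dict) -> dict:
    """Hypothetical handler: classify a sensor reading carried in an event."""
    reading = event["reading"]
    status = "alert" if reading > 100 else "ok"
    return {"device": event["device"], "status": status}

# Two independent invocations of the "same" function agree on the same input:
first = handler({"device": "a1", "reading": 120})
second = handler({"device": "a1", "reading": 120})
assert first == second
```

A function that instead cached readings in a module-level variable would break this guarantee: two copies could then diverge on the same input, which is exactly why serverless platforms favor stateless handlers.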

Quote for the day:

"Rarely have I seen a situation where doing less than the other guy is a good strategy." -- Jimmy Spithill