Daily Tech Digest - July 19, 2018

6 usability testing methods that will improve your software

Successful software projects please customers, streamline processes, or otherwise add value to your business. But how do you ensure that your software project will result in the improvements you are expecting? Will users experience better performance? Will the productivity across all tasks improve as you hoped? Will users be happy with your changes and return to your product again and again as you envisioned? You don’t find answers to these questions with a standard QA testing plan. Standard QA will ensure that your product works. Usability testing will ensure that your product accomplishes your business objectives. Well-planned usability testing will shed a bright light on everything you truly care about: workflow metrics, user satisfaction, and strength of design. How do you know when to start usability testing? Which usability tests are right for your product or website? Let’s examine the six types of usability testing you can use to improve your software.



Facial Recognition Backlash: Technology Giants Scramble

Microsoft's president responded specifically to those allegations in his blog post, first touching on Microsoft's work with ICE, a law enforcement agency that is part of the U.S. Department of Homeland Security. "We've since confirmed that the contract in question isn't being used for facial recognition at all. Nor has Microsoft worked with the U.S. government on any projects related to separating children from their families at the border, a practice to which we've strongly objected," Smith said. Instead, the contract involves supporting the agency's "legacy email, calendar, messaging and document management workloads," Smith said. But at what point should an organization put its foot down with a federal agency operating in a manner to which at least some of its employees object? "This type of IT work goes on in every government agency in the United States, and for that matter virtually every government, business and nonprofit institution in the world," Smith said. "Some nonetheless suggested that Microsoft cancel the contract and cease all work with ICE."


How to Query JSON Data with SQL Server 2016


JSON (JavaScript Object Notation) is now the ubiquitous language for moving data among independent and autonomous systems, the primary function of most software these days. JSON is a text-based way to depict the state of an object in order to easily serialize and transfer it across a network from one system to the next -- especially useful in heterogeneous environments. Because a JSON string equates to a plain text string, SQL Server and any other relational database management system (RDBMS) will let you work with JSON, as they all allow for storing strings, no matter their presentation. That capability is enhanced in SQL Server 2016, the first-ever version that lets developers query within JSON strings as if the JSON were organized into individual columns. What's more, you can read and save existing tabular data as JSON. For a structured and comprehensive overview of the JSON functions in SQL Server 2016, read the "JSON Data (SQL Server)" MSDN documentation. Also, the "JSON Support in SQL Server 2016" Redgate Community article provides a more business-oriented view of JSON in SQL Server 2016, along with a scenario-based perspective of the use of JSON data in a relational persistence layer.
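SQL Server 2016 exposes this capability through functions such as JSON_VALUE and OPENJSON, which address values inside a JSON string with `$.a.b`-style paths. As a rough illustration of those path semantics only, here is a minimal Python sketch; the helper `json_value` and the sample row are invented for illustration and are not SQL Server APIs:

```python
import json

def json_value(json_text, path):
    """Mimic the spirit of SQL Server's JSON_VALUE: extract a scalar
    from a JSON string via a '$.a.b' path, returning None when the
    path is missing (similar to JSON_VALUE's default lax mode).
    This is an illustrative sketch, not the real engine."""
    node = json.loads(json_text)
    for key in path.lstrip("$.").split("."):
        if not isinstance(node, dict) or key not in node:
            return None
        node = node[key]
    return node

# A 'row' whose column holds a JSON string, as in a relational table
row = '{"customer": {"name": "Ada", "city": "London"}}'
print(json_value(row, "$.customer.city"))   # -> London
print(json_value(row, "$.customer.phone"))  # -> None (path absent)
```

The same path expressions can then be used in real T-SQL, where JSON_VALUE projects such values as ordinary columns in a SELECT list.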


Heuristic automation prevents unmitigated IT disasters


IT platforms are constantly under attack from all sorts of possible malicious efforts, ranging from open port sweeping to intrusion attacks and denial-of-service assaults, such as the sophisticated distributed DoS attack that took down Dyn in 2016. Historically, IT and security professionals identify that an attack is happening and then simply apply a defined means to deal with the problem. With heuristic automation in the mix, automation becomes responsive to changes in the IT environment caused by the attack. Instead of applying a simple and often ineffective fix, a heuristic IT management system looks at the IT deployment as an overall entity and applies the right fix for the situation. In this example, heuristic automation could change traffic patterns to offload incoming streams to a separate area of the platform and block certain traffic from access to those streams. It also could reallocate running workloads to a public cloud instead of the private cloud, or vice versa, to prevent service disruption. Provide the heuristics engine with information about possible attacks, and it can harden the platform in real time to prevent them from ever happening.
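As a toy illustration of the idea, a heuristics engine can be pictured as a rule table that maps observed conditions to remediations. Every metric name, threshold, and action below is invented for illustration; a real system would derive its rules from the environment rather than hard-code them:

```python
def choose_response(metrics):
    """Toy heuristic engine: inspect simple traffic metrics and pick
    remediations. Thresholds and actions are illustrative only."""
    actions = []
    if metrics.get("requests_per_sec", 0) > 10_000:
        # Looks like a volumetric DoS: offload streams elsewhere
        actions.append("divert traffic to scrubbing segment")
    if metrics.get("private_cloud_load", 0.0) > 0.9:
        # Private cloud saturated: shift workloads to public cloud
        actions.append("reallocate workloads to public cloud")
    if metrics.get("open_port_sweeps", 0) > 0:
        # Reconnaissance observed: harden before the follow-up attack
        actions.append("tighten firewall rules on swept ports")
    return actions or ["no action"]

print(choose_response({"requests_per_sec": 25_000,
                       "private_cloud_load": 0.95}))
```

The point of the sketch is the shape, not the rules: the engine evaluates the deployment as a whole and can emit several coordinated responses at once.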


What’s new in the Anaconda distribution for Python

Anaconda, the Python language distribution and work environment for scientific computing, data science, statistical analysis, and machine learning, is now available in version 5.2, with additions to both its enterprise and open-source community editions. ... This enterprise edition of Anaconda, released this week, adds new features around job scheduling, integration with Git, and GPU acceleration. Earlier versions of Anaconda Enterprise were built to allow professionals to leverage multiple machine learning libraries in a business context—TensorFlow, MXNet, Scikit-learn, and more. In version 5.2, Anaconda offers ways to train models on a securely shared central cluster of GPUs, so that models can be trained faster and more cost-effectively. Also new in Anaconda Enterprise is the ability to integrate with external code repositories and continuous integration tools, such as Git, Mercurial, GitHub, and Bitbucket. A new job scheduling system allows tasks to be run at regular intervals—for instance, to retrain a model on new data. 


Are organizations over-engineering their data centers?


With such incredible off-premise computing momentum, the potential impact of a widespread outage from a major data center provider grows daily. Enterprises are acutely aware of how outages could impact their mission-critical data – security was listed as a major concern for 77 percent of cloud users in RightScale’s report. Understandably, data center owners and operators have placed resiliency at the top of their priorities and turn to third-party certifiers to help address the most common root causes of outages, including human error, software issues, network downtime, and hardware failure with a corresponding failure of high availability architecture. However, there are limited offerings for data center operators to get a holistic audit of all factors that contribute to the resiliency of their services. We’ve been hearing directly from providers that existing offerings have not kept up with the pace of change in the industry. Incumbent programs will sometimes require a facility to be unnecessarily over-engineered. It’s not cost effective, and takes the focus away from what truly matters to enterprise users: security and reliability.


Raspberry Pi supercomputers: From DIY clusters to 750-board monsters

While the $35 Pi is by no means a computing powerhouse, in recent years enthusiasts have begun harnessing the power of armies of the tiny boards. There's a wide range of Pi clusters out there, from modest five-board arrangements all the way up to sprawling 750-Pi machines. If you're curious to find out more, here are five Pi clusters built in recent years, starting with some you can try yourself and moving on to the Pi-based supercomputers being built by research labs. ... The Los Alamos National Lab (LANL) machine serves as a supercomputer testbed and is built from a cluster of 750 Raspberry Pis, which may later grow to 10,000 Pi boards. According to Gary Grider, head of LANL's HPC division, the Raspberry Pi cluster offers the same testing capabilities as a traditional supercomputing testbed, which could cost as much as $250m. In contrast, 750 Raspberry Pi boards at $35 each would cost just $26,250, though the actual cost of installing the rack-mounted Pi clusters, designed by Bitscope, would likely be more. Grider highlights power-efficiency benefits too, and estimates that each board in a several-thousand-node Pi-based system would use just 2W to 3W.


LabCorp. Cyberattack Impacts Testing Processes

"LabCorp immediately took certain systems offline as part of its comprehensive response to contain the activity," the company said in its SEC filing. "This temporarily affected test processing and customer access to test results on or over the weekend. Work has been ongoing to restore full system functionality as quickly as possible, testing operations have substantially resumed [Monday], and we anticipate that additional systems and functions will be restored through the next several days." Some customers of LabCorp Diagnostics may experience brief delays in receiving results as the company completes that process, LabCorp added. "The suspicious activity has been detected only on LabCorp Diagnostics systems. There is no indication that it affected systems used by Covance Drug Development," a research unit of LabCorp, the company said. "At this time, there is no evidence of unauthorized transfer or misuse of data. LabCorp has notified the relevant authorities of the suspicious activity and will cooperate in any investigation."


An introduction to ICS threats and the current landscape


An ICS is a key underlying element of the OT world. According to the National Institute of Standards and Technology report NIST SP 800-82 R2, "Guide to Industrial Control Systems (ICS) Security," ICS is a "general term that encompasses several types of control systems, including supervisory control and data acquisition (SCADA) systems, distributed control systems (DCS), and other control system configurations such as skid-mounted Programmable Logic Controllers (PLC) often found in the industrial sectors and critical infrastructures." ICS is used in the industrial, manufacturing and critical infrastructure sectors. For instance, railway controls are a type of SCADA. A street light controller may be a PLC, but it can also be part of a SCADA system. Finally, an ICS includes combinations of control components, including electrical, mechanical, hydraulic or pneumatic, that act together to achieve an industrial objective, such as manufacturing, transportation, or the distribution of material or energy.


Q&A on the Book Testing in the Digital Age

A good example of generating test cases is the use of an evolutionary algorithm to test a car's automated parking. You can imagine that with automatic parking, the number of situations the car can be in is nearly infinite. The starting position may vary, with surrounding cars positioned in many different ways, or other obstacles that must not be hit placed around the car. The automatic parking function must not hit anything while parking, and the car needs to end up parked correctly. In this case we can generate a series of starting positions that the automatic parking function needs to tackle. Ideally this is virtual, so we can run a lot of tests quickly. It could be physical tests of course, but that would take more time in test execution. We need to define a fitness function that is evaluated with each test execution run. In this case it would be a degree of passing for the parked car: you can imagine some points for not hitting anything, and points for how well the car is parked in the end. Now we generate a series of tests and run them. Each outcome is evaluated and assigned a total points value.
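The loop described above can be sketched in a few lines of Python. The simulator stub, field names, and scoring weights here are all invented for illustration; in practice the simulator would be the real (ideally virtual) parking system under test:

```python
import random

def simulate_parking(start):
    """Stand-in for the real parking simulator. This stub 'collides'
    when the starting gap is tight, purely for illustration."""
    gap = start["gap_m"]
    return {"collision": gap < 5.0, "alignment": min(gap / 10.0, 1.0)}

def fitness(outcome):
    """Total points for one run: points for not hitting anything,
    plus points for how well the car is parked in the end."""
    return (100 if not outcome["collision"] else 0) + 50 * outcome["alignment"]

def evolve_tests(generations=20, population=30, seed=42):
    """Evolve the starting positions the parking function handles
    worst (lowest fitness), surfacing hard test cases."""
    rng = random.Random(seed)
    tests = [{"gap_m": rng.uniform(4.0, 12.0)} for _ in range(population)]
    for _ in range(generations):
        tests.sort(key=lambda t: fitness(simulate_parking(t)))
        hardest = tests[: population // 2]            # keep the hardest half
        mutated = [{"gap_m": max(3.0, t["gap_m"] + rng.gauss(0, 0.5))}
                   for t in hardest]                  # perturb them slightly
        tests = hardest + mutated
    return min(tests, key=lambda t: fitness(simulate_parking(t)))

print(evolve_tests())
```

Run repeatedly, the population converges on the scenarios the parking function scores worst on, which is exactly where additional testing effort pays off.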



Quote for the day:


"Strength lies in differences, not in similarities." -- Stephen R. Covey


Daily Tech Digest - July 18, 2018

The ability to understand everything that goes on within an environment, and then design and manage a network that can fully meet the needs of the enterprise, is reaching the point where it’s too complex for even a well-resourced team of engineers to achieve with certainty. The problem has become critical enough that without investing in intelligence in the datacentre, many businesses will face an increase in unplanned outages and expensive troubleshooting processes. Happily, there is a solution to this technological headache: Intent-Driven Networks. While the overall concept is new, Huawei’s Intent-Driven Network for CloudFabric Cloud Data Center Networking Solution is already available and handling some of the largest datacentre workloads. What makes Intent-Driven Networks innovative is the machine learning algorithms underpinning them. Machine learning has finally reached the point where it can help to understand how a network is being used. An artificial intelligence (AI) can then be part of the process to devise the right configuration for the network to maximise availability and redundancy.



What serverless architecture actually means, and where servers enter the picture

"You adopt virtualization, [and] a lot of your people don't need to care as much about their metal. You adopt infrastructure-as-a-service in the cloud, [and] you're not needing to worry about the hypervisors any more. You adopt a PaaS, and there are other things that essentially go away. All become 'smaller teams' problems. "You adopt serverless, and for developers to be successful in developing and architecting applications that work on these platforms," Kersten continued, "they also have to learn more of the operational burden. And it may be different to your traditional sysadmin who is racking and stacking hardware, and having to understand disk speed and things like that, but the idea that developers get to operate in a pure bubble and not actually think about the operational burden at all, is completely deluded. It just isn't how I'm seeing any of the successful serverless deployments work. The successful ones are developers who have some operational expertise, have some idea of what it's like to actually manage things in production, because they're still having to do things."


Why Developers And Business Leaders Are Going Cloud Native


The infusion of cloud and software as a service (SaaS) technologies into enterprises has created complex hybrid information technology environments, each complicated by its own blend of tools and customizations. With legacy and next-generation cloud systems sitting side by side inside the enterprise, cloud-native technologies create a unified framework for these tools to work together and power the modern business. Do not confuse cloud-native with cloud computing; adopting cloud-native does not require the exclusive use of public cloud. Cloud-native is a way of thinking about and designing the components of software systems to optimize for distributed, cloud-based deployments. These deployments address increasingly urgent issues of scale and availability that enterprises of all sizes — not just the internet giants who pioneered the patterns and tooling associated with cloud-native — face. Cloud-native design consists of three component parts. Getting a piece of code that a developer writes (along with everything it depends on) deployed in production can be tough. Tools have emerged to help.


The Digital Transformation of Financial Reporting: Why XBRL Should be on Everyone’s Radar

In making data machine-readable through XBRL, the ESEF directive will make the financial information of more than 5,000 companies in the European Union easily transferable across technologies that natively process XML, such as NoSQL databases. In the UK, according to a white paper by the Financial Reporting Council, more than two million companies already report using Inline XBRL (iXBRL) to HMRC, while another two million file their accounts using iXBRL with Companies House. However, ESEF will require many more companies, including all listed companies, to file digital accounts with XBRL in the near future. This is a sign of things to come in the UK and across the globe. The Bank of Japan was among the early adopters, but more recently the Bank of England announced a Proof of Concept (PoC) project to explore how XBRL could help it to significantly reduce the cost of change, drive resource efficiencies and improve speed and flexibility of access to large quantities of regulatory data from financial institutions.


The cybersecurity incident response team: the new vital business team

There are some important considerations to be made before starting a programme. These include operational and technical issues – such as securing the necessary equipment – as well as determining the resources and funding needed for newly formed teams. Firms must also ensure that existing teams are not left shorthanded and are still able to carry out their responsibilities. As with any team, the effectiveness of the CSIRT is greatly increased when it has a defined objective. When everyone within the team is clear on their role, it’s easier for them to pull in the same direction. Teams should be structured in a way that gives every member responsibility and accountability, but also defines who has the final say. During the planning phases it’s also essential to remove any areas of duplication. Re-doing activities and processes is a waste of resources and simply delays the time taken to reach the desired outcome. Companies can identify where overlaps and gaps exist by carrying out analysis on their current cyber response programmes. 


What is cloud networking up to now? It's complicated

From a technological perspective, an enterprise is only as agile as the network it operates on. As a cloud footprint expands, increasingly complex network policies that bind hybrid and multi-clouds together can significantly reduce a company's ability to pivot toward new technologies. Cloud orchestration and multiple cloud management platforms can be used to recapture business agility at the cloud networking level. Cloud orchestration can be thought of as the upper-level management layer that controls the various network automation building blocks that replaced manual tasks. Orchestration tools are used to develop intelligent business workflows that include various network requirements including application performance, network resiliency and security postures. Those policies can then be deployed throughout the entire cloud infrastructure. While cloud orchestration creates the foundation for end-to-end network control within a specific cloud platform, users are now seeking to gain the same orchestration benefits between two or more private and public cloud providers. This is where multiple cloud management platforms come into play.


Network visibility and assurance for GDPR compliance

Since GDPR also restricts cross-border data transfers, it’s important that networking teams understand the country of origin of any particular data, and how that data will traverse the organization’s networks, remaining mindful of which paths it will take and where it will be stored. To assure and keep track of this information, therefore, businesses will require full visibility across their entire network, including in the data centers and – now, more than ever – the cloud. This holistic visibility across the entire service delivery infrastructure – from the wireless Edge to the Core to the datacenter and into the Cloud – can be achieved by continuous end-to-end monitoring and analysis of the traffic data, or “wire-data”, flowing over the network. With GDPR compliance, and Article 32, not to mention much of modern business activity, reliant on the availability of effective, resilient and secure infrastructure, it’s important that the right approach is taken to service assurance. Analysis of this wire-data in real-time will enable IT teams to generate smart data which can provide the end-to-end service-level visibility and actionable insights they need to deliver this assurance.


Digital transformation plan shaped by cloud, AI

One of the tools that I use to keep the organization focused is a structure called VSEM, for vision, strategy, execution and measurement. Vision is five years out; that's your vision for your entire IT organization. Strategy is two to four years out: what are your strategic initiatives, like moving to public cloud; there are five or six. All the projects and programs are under execution. M is measurement. Also, once a year my staff goes off site to talk about the scope, intent and mission that we're going to accomplish in the next 12 to 18 months. So, every year we come up with that intent and mission; that's the what. And from that we pick the technology that we need to work on; that's the how. ... It's joint ownership of objectives, so when I work on a project with India and the U.S., we have people in India and in the U.S. working on the same teams and the same initiatives with the same ultimate goal. That's what drives it. Some [other companies] will set up discrete centers of excellence, and that can work, but it can become islands.


Matching disaster recovery to cyber threats


“Unfortunately, there’s no magic eight ball when it comes to cyber security; it is a moving target. Just because something protected a business last year, does not mean it will keep the company safe this year,” he says. “Therefore, CIOs need to be particularly vigilant, carry out regular risk assessments of the business, and use this information to draw up a security plan that ensures there aren’t any vulnerabilities that can be exploited in the future.” The basis for this plan, he says, should be an understanding of the behavioural changes in people. “The best technological defences can be unwound by a social engineering attack, so it is important that employees are trained to be both the first and last lines of defence. Security plans should be reviewed regularly to try and stay one step ahead of threats as well as changes to technology used in the company.” Developing a disaster recovery plan takes significant time and effort. But Mike Osborne, founding partner of the Business Continuity Institute and executive chairman of Databarracks, says creating and implementing one for cyber security is particularly challenging.


7 Skills That Aren’t About to Be Automated

Machines have made great contributions to the quality and accessibility of education, from massive open online courses (MOOCS) to teaching simulations to Khan Academy lessons. In commercial organizations, though, where teaching requires understanding the context of a person’s development within the organization, managers and coaches shine. For example, when Ben Horowitz was the director of product management at Netscape, he faced a problem: Many managers on his team felt overworked, yet their efforts did not translate into successful evangelism for the products they were in charge of. He wrote a short document titled Good Product Manager/Bad Product Manager and used it to train his team on his basic expectations. What happened next shocked him: “The performance of my team instantly improved. Product managers that I previously thought were hopeless became effective. Pretty soon, I was managing the highest-performing team in the company.”



Quote for the day:


"The level of morale is a good barometer of how each of your people is experiencing your leadership." -- Danny Cox


Daily Tech Digest - July 17, 2018


Red Hat announced the general availability of .NET Core 2.1 for its Red Hat Enterprise Linux and OpenShift container platforms. While .NET Core is a modular, open source, cross-platform (Windows, Linux and macOS) .NET implementation for creating console, Web and other apps, the Red Hat version focuses on microservice and container projects. Its .NET Core 2.1 efforts primarily target enterprise Linux (RHEL) and OpenShift, the company's Kubernetes container application platform. ... "With .NET Core you have the flexibility of building and deploying applications on Red Hat Enterprise Linux or in containers," the company said in a blog post last week. "Your container-based applications and microservices can easily be deployed to your choice of public or private clouds using Red Hat OpenShift. All of the features of OpenShift and Kubernetes for cloud deployments are available to you." Red Hat said developers can use .NET Core 2.1 to develop and deploy applications 



Why banks like Barclays are testing quantum computing

“Quantum computing will become increasingly important over time,” he said. "In 20 years, quantum computing will not be just an option. It may be our only option, from an energy perspective, let alone from a computational standpoint.” Quantum computing got its start in 1981, but it still feels like science fiction. Quantum chips have to be kept at subzero temperatures in an isolated environment. They promise performance gains of a billion times and more, through the processors’ ability to exist in multiple states simultaneously, and therefore to perform tasks using all possible permutations in parallel. Currently, the chief use case banks and other financial firms see for it relates to investments. "Banks and financial institutions like hedge funds now appear to be mostly interested in quantum computing to help minimize risk and maximize gains from dynamic portfolios of instruments,” said Dr. Bob Sutor, vice president, IBM Q Strategy and Ecosystem. “The most advanced organizations are looking at how early development of proprietary mixed classical-quantum algorithms will provide competitive advantage."


Taking the temperature of IoT for healthcare

Developed by researchers at Tufts University using flexible electronics, these smart bandages not only monitor the conditions of chronic skin wounds, but they also use a microprocessor to analyze that information to electronically deliver the right drugs to promote healing. By tracking temperature and pH of chronic skin wounds, the 3mm-thick smart bandages are designed to deliver tailored treatments (typically antibiotics) to help ward off persistent infections and even amputations, which too often result from non-healing wounds associated with burns, diabetes, and other medical conditions. Sameer Sonkusale, Ph.D., professor of electrical and computer engineering at Tufts University’s School of Engineering, a co-author of Smart Bandages for Monitoring and Treatment of Chronic Wounds, said in a statement: “Bandages have changed little since the beginnings of medicine. We are simply applying modern technology to an ancient art in the hopes of improving outcomes for an intractable problem.” It’s unclear if Tufts’ smart bandages will be internet connected, but the potential benefits of an IoT connection here seems obvious.


Google Announces Firestore Security Rules Simulator


Some of the functions that developers can write rule tests for include document reads, writes, and deletes, all of which can be tested against an organization’s actual Firestore database. There is also the option to simulate a particular user being signed in, which can be useful for testing permissions that may be assigned to various user accounts. In addition to releasing the Firestore Security Rules Simulator, Google also increased the number of calls that can be made per security rule from three to ten for single document requests. For those that are using batch-requests or other multi-resource requests, a total of 20 combined calls is allowed for all of the documents included in the call. Google also mentioned that it has improved its reference documentation related to Firebase Security Rules and the specific language that is used to write them. Security is something that cannot be taken lightly, especially with the amount of data that is stored in the cloud today. While there are many advantages to having data stored in the cloud or hybrid environments, security has to be one of the top priorities of developers and administrators that work with that data on a daily basis.


No blank checks: The value of cloud cost governance

Although you can make a case for the cloud’s value around agility and compressing time to market, that will fall on deaf ears among your business leaders if you’re 20 to 30 percent over budget for ongoing cloud costs. There’s no reason to not know your ongoing cloud costs. In the planning phase, it’s just a matter of doing simple math to figure out the likely costs month to month. In the operational phase, it’s about putting in cost monitoring and cost controls. This is called cloud cost governance. Cloud cost governance uses a tool to both monitor usage and produce cost reports to find out who, what, when, and how cloud resources were used. Having this information also means that you can do chargebacks to the departments that incurred the costs—including overruns. But the most important aspect with cloud governance is not monitoring but the ability to estimate. Cloud cost governance tools can tell you not just about current use but also about likely costs in the future. You can use that information for budgeting.
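That planning-phase math really can be a few lines. The rates, helper names, and budget figures below are placeholders for illustration, not any provider's actual pricing:

```python
def monthly_cost(instances, hourly_rate, hours_per_month=730,
                 storage_gb=0, storage_rate_per_gb=0.0):
    """Back-of-the-envelope monthly cloud spend: compute plus storage.
    Plug in your provider's real pricing; these are placeholders."""
    return round(instances * hourly_rate * hours_per_month
                 + storage_gb * storage_rate_per_gb, 2)

def overrun_pct(actual, budget):
    """How far actual spend ran over (or under) plan, in percent --
    the number a cost-governance report surfaces per department."""
    return round(100 * (actual - budget) / budget, 1)

# e.g. four instances at a hypothetical $0.10/hour plus 500 GB of
# storage at a hypothetical $0.02/GB-month
plan = monthly_cost(4, 0.10, storage_gb=500, storage_rate_per_gb=0.02)
print(plan, overrun_pct(380.0, plan))
```

Cloud cost governance tools automate exactly this kind of calculation continuously, per team and per resource, so overruns surface before the bill does.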


The 5 factors driving workers away from the gig economy

The success doesn't come as a huge surprise, as gig economy jobs have many advantages. For one, employees set their own schedules, simply logging onto the app when they feel like working. That means no more requesting off for doctor's appointments or vacations—you are able to work on your own time. Gig economies provide more opportunity for more people, Forrester researchers Marc Cecere and Matthew Guarini wrote in a January 2018 report. Between students, retirees, underemployed workers, remote employees, and other non-traditional professionals, the gig economy opens its doors to any and all backgrounds, said the report. Companies also benefit from the gig economy with its significantly lower costs. Saving money is the number one reason companies form or adopt the contingent workforce concept, according to the Forrester report. Since freelance employees have to provide their own devices, vehicles, and work spaces, companies save big. And with freedom for employees to opt in or out at any time, companies get to eliminate the expense of recruiting firms, the report found.


Blockchain and bluster: Why politicians always get tech wrong

Part of the problem is that few politicians really understand or care about technology. Government technology projects regularly run over time and over budget, and still fail to deliver their design goals -- often because ministers fail to grasp the complexities involved. Twenty years ago such an attitude towards technology on the part of our elected representatives might have been understandable, if unwise. Now it's positively dangerous: technology is one of the biggest drivers of change in society, because it underpins almost everything we do. It's one of the biggest sources both of threats and opportunities. Issues from artificial intelligence to fake news to mass surveillance -- and yes, perhaps even blockchain -- require an informed and engaged political class, able to steer societies around potential risks and make the right decisions about how we should best employ these innovations. For politicians, bashing tech companies to score cheap points and then hoping that immature technology will save them from self-inflicted problems is no longer an option.


Getting to Know Graal, the New Java JIT Compiler


It must be clearly understood that despite the enormous promise of Graal and GraalVM, it currently is still early stage / experimental technology. It is not yet optimized or productionized for general-purpose use cases, and it will take time to reach parity with HotSpot / C2. Microbenchmarks are also often misleading - they can point the way in some circumstances, but in the end only user-level benchmarks of entire production applications matter for performance analysis. One way to think about this is that C2 is essentially a local maximum of performance and is at the end of its design lifetime. Graal gives us the opportunity to break out of that local maximum and move to a new, better region - and potentially rewrite a lot of what we thought we knew about VM design and compilers along the way. It's still immature tech though - and it is very unlikely to be fully mainstream for several more years. This means that any performance tests undertaken today should therefore be analysed with real caution.


Mobile devices lost in London underline security risk


Mobile phones represent the greatest risk of identity theft to individuals and important data loss to businesses, the report said. Laptops represent the next most commonly lost device, with a total of 1,155 lost, followed by tablet computers, with 1,082 devices lost. Barry Scott, CTO for Europe at identity and access management firm Centrify, said that with tens of thousands of electronic devices going missing every year, businesses need to wake up to the fact that fraudsters will be attempting to gain access to critical information through lost or stolen devices. “With cyber attacks increasing at an alarming rate, simple password-based security measures are no longer fit for purpose,” he said. Instead, Scott said businesses needed to adopt a zero-trust approach: verifying users and their devices, and limiting the volume of data they can access. “Failure to take action acts as an open invitation to cyber criminals and hackers, who see lost devices as an easy way into a corporate enterprise,” he said.


Data Quality Evolution with Big Data and Machine Learning


With limited data sets and structured data, data quality issues are relatively clear. The processes creating the data are generally transparent and subject to known errors: data input errors, poorly filled forms, address issues, duplication, etc. The range of possibilities is fairly limited, and the data format for processing is rigidly defined. With machine learning and big data, the mechanics of data cleansing must change. In addition to more and faster data, there is a great increase in uncertainty from unstructured data. Data cleansing must interpret the data and put it into a format suitable for processing without introducing new biases. The quality process, moreover, will differ according to specific use. Data quality is now more relative than absolute. Queries need to be better matched to data sets depending on research objectives and business goals. Data cleansing tools can reduce some of the common errors in the data stream, but the potential for unexpected bias will always exist. At the same time, queries need to be timely and affordable. There has never been a greater need for a careful data quality approach.



Quote for the day:


"Change the changeable. Accept the unchangeable. And remove yourself from the unacceptable." -- Denis Waitley


Daily Tech Digest - July 16, 2018

The most pronounced difference between the two speakers is in their respective digital assistants: Amazon Alexa and Google Assistant. Note that both are constantly evolving and adding new features and capabilities, so any comparison is based on a snapshot in time. That said, tests of both products by TechHive and other tech publications all generally agree: Amazon Alexa excels as a tool for ordering stuff, while Google Assistant wins out when it comes to general search and information requests. Both platforms are pretty good when it comes to controlling other smart home devices and systems, although Amazon was more aggressive early on when it came to working with third-party developers. Google, however, has come a long way on that front. So if you envisage your primary use to be adding items to your Amazon shopping cart—“Alexa, reorder coffee”—then Alexa is the way to go. If you want to use it less for shopping and more for information—“Hey Google, how long will it take for me to get to Sacramento by train?”—then Google might have the edge (Alexa responded to that query with driving time). If you’re looking to control your other devices in your home, check to see which platform is the most compatible with what you have. More on that in a bit.



Are security professionals moving fast enough?

Of course, it is difficult for security professionals to separate the wheat from the chaff when it comes to machine learning and AI. Unfortunately, there are many vendors simply slapping AI onto their messaging, but if you scratch beneath the surface, it’s nothing more than words. This makes it harder for organisations to know if what they are being promised is true and can lead to much cynicism, perhaps a reason why so few businesses are investing in these technologies. It’s more important now than ever before that enterprises shift from the manual and into the automated world, and harness technologies that can carry out some of this heavy lifting. Regulation, such as the GDPR, almost makes this an imperative with the stipulated reporting timelines. The job of a technology team has become much harder with the increased number of cyber threats and how rapidly they are evolving. So if there are ways to save time on other jobs, surely they should be grasping them with both hands. Now it’s time for security professionals to pick up the pace.


4 essential questions to audit your agile process

You check your blood pressure, tire pressure, and your stock prices. But when was the last time you audited your agile? Even experienced agilists can fall into bad habits, and it’s important to catch them early. That’s why I recommend auditing your agile process every six months. It might sound daunting, but everything you need boils down to four questions you can fit on a standard 3x5 notecard. During your next retrospective, ask your team to answer each of the following questions with a five-star rating scale. Five stars means you have a superawesome process, and one star essentially means you have no process or it’s really poor. Of course, most scores will be in between, but they will help your team focus on improving the weakest points. ... Each person on the team is responsible for calling out where stories lack clarity. Poorly constructed stories result in churn and wasted time. Developers, quality engineers, and product owners must agree on definition, business value, requirements, and internal dependencies. Otherwise, you’ll find bugs, pushbacks, and failure to sign off on a completed story.


What Is A Net Promoter Score (NPS)?

Once you gather the survey data, your company’s NPS is determined by subtracting the percentage of detractors from the percentage of promoters, while passives count towards the total number of respondents. ... It’s easy enough to calculate your organization’s NPS manually, but if you want to outsource the process, there are third-party services that will help you send out surveys and determine your score. ... A good net promoter score is technically anything above zero, which means you have more promoters than detractors. The worst score you can get is a -100, which means you do not have a single promoter and that all your customers are detractors – vice versa for a score of 100. A score of 50 or more is considered excellent. ... The result of NPS is a straightforward metric that companies can use to gauge customer loyalty and the health of the company’s brand. It’s just one question, but it’s an important metric for helping businesses understand where they stand in the market and determine whether their effort is better spent on maintaining customers' satisfaction or if it’s time to try winning back unhappy customers.
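The arithmetic above is simple enough to sketch in a few lines of Python. The 0–10 response scale and the bands (promoters 9–10, passives 7–8, detractors 0–6) are the standard NPS convention:

```python
def nps(scores):
    """Compute Net Promoter Score from 0-10 survey responses.

    Promoters score 9-10, detractors score 0-6; passives (7-8)
    count toward the total number of respondents but not the score.
    """
    if not scores:
        raise ValueError("no survey responses")
    promoters = sum(1 for s in scores if s >= 9)
    detractors = sum(1 for s in scores if s <= 6)
    return round(100 * (promoters - detractors) / len(scores))

# 4 promoters, 3 passives, 3 detractors out of 10 responses:
print(nps([10, 9, 9, 10, 8, 7, 7, 3, 5, 6]))  # -> 10
```

Note how the passives drag the score toward zero simply by inflating the denominator, which is exactly why they "count towards the total number of respondents."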


Microsoft Teams free version takes on Slack, Cisco Webex

With the free Teams product, Microsoft is telling its largest rivals -- Cisco and Slack -- that the company is in the market "to win it -- or at least significantly disrupt it," Kurtzman said. However, the competitors have advantages. Slack has more than 1,500 third-party app integrations, and Cisco's Webex Teams is a video-centric collaboration platform that works well with Cisco's networking hardware and software. Microsoft is preparing for battle by simplifying its collaboration portfolio. The company has said it will replace Skype for Business Online with Teams, a move that raised concerns that Teams won't have the same telephony tools. Microsoft has tried to ease customer anxiety by rolling out Teams calling features, such as call delegation and direct routing. Call delegation lets a user receive someone else's call -- a necessary feature within enterprises. Direct routing enables companies to use their existing telephony infrastructure with Teams. However, accessing that function requires a company to have Teams and Phone System -- formerly called Cloud PBX -- as part of an Office 365 subscription.


All you need to know about the move from SHA-1 to SHA-2 encryption

SHA-2 is the cryptographic hashing standard that all software and hardware should be using now, at least for the next few years. SHA-2 is often called the SHA-2 family of hashes because it contains many different-size hashes, including 224-, 256-, 384-, and 512-bit digests. When someone says they are using the SHA-2 hash, you don’t know which bit length they are using, but the most popular one is 256 bits (by a large margin). Although SHA-2 shares some of the same math characteristics as SHA-1 and minor weaknesses have been discovered, in crypto-speak it’s still considered “strong” for the foreseeable future. Without question, it’s way better than SHA-1, and any critical certificates, applications, and hardware devices still using SHA-1 should be moved to SHA-2. All major web browser vendors (e.g. Microsoft, Google, Mozilla, Apple) and other relying parties have requested (and have been doing so for years) that all customers, services and products currently using SHA-1 move to SHA-2, although what has to be moved by when differs depending on the vendor.
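The family relationship is easy to see in code. Python's standard `hashlib` module exposes all four common SHA-2 variants; the digest length is essentially the only thing the name changes:

```python
import hashlib

message = b"migrate away from SHA-1"

# Each SHA-2 family member differs in its digest length (in bits).
for name in ("sha224", "sha256", "sha384", "sha512"):
    digest = hashlib.new(name, message).hexdigest()
    print(f"{name}: {len(digest) * 4}-bit digest")
```

As the article notes, SHA-256 is the most widely deployed of the four by a large margin, so it is the usual default when "SHA-2" is specified without a bit length.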


Quantum-secured network ‘virtually un-hackable’

Photons, as used in the quantum-key distribution work, will likely end up securing future networks and could turn out to be a crucial element of upcoming quantum computing overall. The particles of light are good for moving qubits (quantum information carriers) because they can travel distances and work with fabricated chips, explained the University of Maryland in a news article announcing what it said is a breakthrough in photon-carried quantum computing. The school said it has invented the first single-photon transistor from a semiconductor chip — a photon transistor, in other words. Traditional transistors are the minuscule routing switches used in every form of computing. Producing a photon-based one, where the switches interact with each other, could “attain exponential speedup for certain computational problems.” Photons don’t natively interact — a prior downside. “Roughly 1 million of these new transistors could fit inside a single grain of salt. It is also fast and able to process 10 billion photonic qubits every second,” the school said. “Quantum communications technologies are starting to play a significant role in securing our data and communications,” said Dr. Grégoire Ribordy.


What Is Geospatial Data and How Can It Save Your Life?

Using the in-memory database and application platform SAP HANA, SAP has developed a prototype that helps organizations analyze geospatial data and predict how storms can impact a given region. After years of collaboration with Esri, a leader in geographical information systems, the two companies announced tighter integration between SAP HANA and Esri’s “geodatabase” in January. This allows customers to analyze geographic information within their business processes and take action more easily. Previously, customers had to analyze location data separately from business applications, then combine them. As Hasso Plattner, co-founder of SAP and chairman of the Supervisory Board of SAP SE, pointed out at SAPPHIRE NOW, SAP just took spatial capabilities one step further and released them as services that can pull weather or satellite data directly from providers into the enterprise data layer. Customers can now create location-aware applications more quickly using this functionality, part of the recently announced SAP HANA Data Management Suite.


EU Lawmakers Threaten Businesses Relying On Privacy Shield

The EU’s General Data Protection Regulation, like its predecessor the Data Protection Directive, authorizes the export of EU citizens’ personal information only to jurisdictions that provide an adequate level of privacy protection. Privacy Shield, an agreement signed by EU and U.S. officials in 2016, seeks to reconcile the different levels of legal protection afforded on each side of the Atlantic, allowing businesses to export EU citizens’ data to the U.S. for processing. The EU’s executive body, the European Commission, ruled in 2016 that the Privacy Shield deal provided adequate protection for personal information, but called for it to be reviewed annually. It’s with an eye on the next review of the agreement, in September, that Members of the European Parliament called for the deal to be suspended in a vote on July 5. The Parliament’s resolution on Privacy Shield identified several areas in which U.S. authorities had not yet met their commitments under the agreement, despite having been given a deadline of May 25, 2018. The U.S. Senate has still not ratified the appointment of three members of the Privacy and Civil Liberties Oversight Board (PCLOB), including its chairman.


5 Essentials to Achieve IT Resilience

Many cloud backup and storage solutions have appeal because they offer cloud storage and data access and restore from anywhere. However, such solutions don’t offer capabilities that allow users to totally recover applications, servers, and entire business operations in a tight timeframe. Because of this, companies require IT resilience that is affordable and effective. This means that solutions must offer automated, seamless access to your data and applications. But what do solutions need, specifically, to achieve resilience? A few key elements of technology must be present for IT Resilience and Assurance (ITRA) to be achieved. These components are anomaly detection, backup, deduplicated file system assisted replication, orchestration, and assurance. That’s a lot to take in one sentence, so let’s break them down! ... Anomaly detection is a feature that enables users to predictively detect a risk to their systems. This capability allows users to receive an early warning if activity happening with their data could be related to a ransomware or other kind of malware attack. Signs that a ransomware attack is occurring include affected files being renamed, causing them to appear to be new files when backed up.
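As a rough illustration of that rename-based warning sign, here is a hypothetical heuristic. The function name, the set-of-paths snapshot representation, and the 30% threshold are all invented for this sketch, not taken from any particular product:

```python
def mass_rename_suspected(previous, current, threshold=0.3):
    """Hypothetical heuristic for one ransomware signal: files that
    are renamed en masse make previously-seen paths disappear while
    roughly as many unfamiliar paths appear between backup runs.

    `previous` and `current` are sets of file paths from consecutive
    backup snapshots; `threshold` is an illustrative cutoff.
    """
    if not previous:
        return False
    missing = previous - current          # paths that vanished
    added = current - previous            # paths that appeared
    churn = min(len(missing), len(added)) / len(previous)
    return churn >= threshold

old = {"a.doc", "b.doc", "c.doc", "d.doc"}
new = {"a.doc", "x.locked", "y.locked", "z.locked"}
print(mass_rename_suspected(old, new))    # -> True
```

A real anomaly-detection feature would combine several signals like this (entropy of file contents, write rates, extension changes) and raise an early warning rather than act on any single heuristic.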



Quote for the day:


"No one really succeeds everyday but successful people do something everyday to help themselves succeed." -- @LeadToday


Daily Tech Digest - July 15, 2018

“Enterprise Architecture As A Service” – What?


Recent success results in organizations having to deal with big decisions on ways to invest and maintain their success. Perceived failure results in a need to make decisions to address the failures. Each of these scenarios gets attention during the strategic planning process and, as pointed out in “Enterprise Architecture as Strategy” by Jeanne W. Ross, Peter Weill, and David Robertson, Harvard Business School Press, 2006, EA is a useful tool. The bottom line is that big decisions are looming and there is a perception that EA can help by defining “the organizing logic for business processes and IT Infrastructure, reflecting the integration and standardization requirements of the company’s operating model” so that “individual projects can build capabilities – not just fulfill immediate needs”. But there is another, less positive, perception out there – EA can be a money sink! It could result in tons of paper, take years, and be outdated by the time it is finished, to name a few concerns. Moreover, the need for change often runs on a shorter timeline than the perceived timeline for producing an enterprise architecture.


HTC’s blockchain phone is real, and it’s arriving later this year

Prior to the launch, the company is partnering with the popular blockchain title, CryptoKitties. The game will be available on a small selection of the company’s handsets starting with the U12+. “This is a significant first step in creating a platform and distribution channel for creatives who make unique digital goods,” the company writes in a release tied to the news. “Mobile is the most prevalent device in the history of humankind and for digital assets and dapps to reach their potential, mobile will need to be the main point of distribution. The partnership with Cryptokitties is the beginning of a non fungible, collectible marketplace and crypto gaming app store.” In other words, the company is attempting to reintroduce the concept of scarcity through these decentralized apps. HTC will also be partnering with Bitmark to help accomplish this. If HTC is looking for the next mainstream play to right the ship, this is emphatically not it.


Interview: Bill Waid talks about AI ML


What is interesting about this well-known and often referenced use of AI/ML is the potential opportunity cost. Despite the significant savings realized, the impact of declining a customer transaction that was not fraudulent leads to an even more costly unsatisfactory customer engagement and eventual attrition. To operationalize this AI/ML solution and fully realize the value, decisioning and a continuous improvement feedback loop were required. Capitalizing on the power of AI/ML, FICO has expanded both the algorithms and application of AI/ML to a broad set of solutions since 1992. Most notable is the use of ML to find predictive patterns in the ever-expanding data lakes our clients are collecting, and using those ML findings to augment existing decisions and incrementally improve business outcomes. By deploying ML models in a way that the decision outcome could be managed and monitored to ensure accuracy, business owners could learn from the ML model and gain confidence that the model was indeed providing tangible improvement. This last innovation was a natural evolution toward what FICO refers to as explainable AI (xAI).


How AI will change your healthcare experience (but won’t replace your doctor)

Techniques such as machine learning enable healthcare providers to analyze large amounts of data, allowing them to do more in less time, and supporting them with diagnosis and treatment decisions. For example, suppose you feed a computer program a large number of medical images that either show or do not show symptoms of a disease. The program can then learn to recognize images that may point towards the disease. For instance, researchers at Stanford developed an algorithm that helps to evaluate chest X-rays to identify images with pneumonia. This doesn’t mean, however, that the radiologist will no longer be needed. Instead, think of AI as a smart assistant that will support doctors, alleviating their workload. This is also how we approach AI at Philips: we work together with clinicians to develop solutions that make their lives easier and improve the patient experience. That’s why we believe in the power of adaptive intelligence. It’s not really about AI per se – it’s about helping people with technology that adapts to their needs and extends their capabilities.


Machine learning will redesign, not replace, work

"Any manager could take this rubric, and if they're thinking of applying machine learning this rubric should give them some guidance," he said. "There are many, many tasks that are suitable for machine learning, and most companies have really just scratched the surface." ... Since a job is just a bundle of various tasks, it's also possible to use the rubric to measure the suitability of entire occupations for machine learning. Using data from the federal Bureau of Labor Statistics, that's exactly what they did—for each of the more than 900 distinct occupations in the U.S. economy, from economists and CEOs to truck drivers and schoolteachers. "Automation technologies have historically been the key driver of increased industrial productivity. They have also disrupted employment and the wage structure systematically," the researchers write. "However, our analysis suggests that machine learning will affect very different parts of the workforce than earlier waves of automation … Machine learning technology can transform many jobs in the economy, but full automation will be less significant than the reengineering of processes and the reorganization of tasks."


Reinventing The Enterprise - Digitally

Through autonomization and emergence, self-tuning firms create significant advantages. They can better understand customers by leveraging data from their own ecosystems and platforms to develop granular insights and automatically customize their offerings. They can develop more new, marketable products by experimenting with offerings and leveraging proprietary data. And they can implement change more quickly and at lower cost by acting autonomously.  The benefits of autonomization and emergence well exceed those that can be realized from digitization programs aiming to increase efficiency or product innovation alone. They are compounded by self-reinforcing network and experience effects: better offerings attract more customers and more data; experimentation brings knowledge that increases the value of future experimentation. One example of a self-tuning organization is Alibaba. Not only does its e-commerce platform provide a sea of user data, but the company uses it to generate real-time insights in a granular manner.


Two studies show the data center is thriving instead of dying

The top reasons for such investment are security and application performance (75% of respondents) and scalability (71%). It also found that 53% of respondents intend to increase investment in software-defined storage, 52% in NAS and 42% in SSD ... IHS noted that while new technologies such as artificial intelligence and containers are gaining traction, traditional data center apps, such as Microsoft Office (22%), collaboration tools such as email, SharePoint, and unified communications (18%), and general-purpose IT apps (30%) are still being used. The second survey comes from SNS Telecom & IT, a market research firm based in Dubai, UAE. It attributes the growth in big data and the subsequent massive inflow of all sorts of unstructured data as the reason for investment in IT equipment by the financial services industry. “As this Big Data construct expands to include streaming and archived data along with sensor information and transactions, the financial sector continues its steady embrace of big data analytics for high-frequency trading, fraud detection and a growing list of consumer-oriented applications,” said the authors.


Despite the security measures you've taken, hacking into your network is trivial

Closing security vulnerabilities and establishing effective cybersecurity policies and procedures is going to require more than just better technology. Effective security will demand a complete change of attitude by every employee, executive, and individual operating a computing device. Security must become the priority, even at the expense of convenience. Confirming results reported in other studies, the Positive Technologies research showed that more than a quarter of employees still inexplicably clicked a malicious link sent to them in an email. Despite extensive training and retraining, employees--regardless of industry or level of technical knowledge--continue to operate with an almost unconscious lack of security awareness. Until this cavalier attitude toward protecting company data changes, phishing attacks and authentication circumvention will continue to plague the modern enterprise.


The Economics Of AI - How Cheaper Predictions Will Change The World


Key to this, they argue, will be whether human AI “managers” can learn to differentiate between tasks involving prediction, and those where a more human touch is still essential. When I met with Joshua Gans – professor of strategic management and holder of the Jeffrey S Skoll Chair of Technical Innovation and Entrepreneurship at the University of Toronto – he gave me some insight into how economists are tackling the issues raised by AI. "As economists studying innovation and technological change, a conventional frame for trying to understand and forecast the impact of new technology would be to think about what the technology really reduces the cost of," he tells me. "And really it's an advance in statistical methods – a very big advance – and really not about intelligence at all, in a way a lot of people would understand the term ‘intelligence.' ... “When I look up at the sky and see there are grey clouds, I take that information and predict that it’s going to rain. When I’m going to catch a ball, I predict the physics of where it’s going to end up. I have to do a lot of other things to catch the ball, but one of the things I do is make that prediction.”


Creating a Defensible Security Architecture

Controls should not only face the Internet but should also be implemented to secure authorized access between internal assets. Basic adjustments such as this allow for far superior prevention controls and, more importantly, detection controls. Think about this for a moment: if a computer on a subnet or zone A attempts to talk to any system in zone B and that traffic is not allowed, then the connection will be denied, and you will be notified of it. Basic firewall rules aren't rocket science, but they are highly effective controls. Modern challenges also must be overcome. For instance, consider an intrusion detection/prevention device, web proxy, data loss prevention sensor, network antivirus, or any other Layer 7 network inspection solution. These are all crippled by network encryption. Your brand-new shiny NGFW may not be configured to handle 70%+ of the traffic going through it. Basically, without understanding technologies like Secure Sockets Layer (SSL) inspection, SSL decrypt mirroring, HTTP Strict Transport Security (HSTS), certificate transparency, and HTTP Public Key Pinning (HPKP), how can you handle modern encryption?
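The zone-to-zone logic described above amounts to a default-deny allow-list: any flow not explicitly permitted is both blocked and reported. A minimal sketch (the zone names and the notification hook are invented for illustration):

```python
# Hypothetical allow-list of (source zone, destination zone) pairs.
# Anything not listed is denied AND generates a notification --
# the detection control matters as much as the prevention control.
ALLOWED = {
    ("workstations", "web-proxy"),
    ("web-proxy", "internet"),
    ("workstations", "file-servers"),
}

def check_flow(src_zone, dst_zone, log=print):
    """Return True if the flow is permitted; otherwise deny and notify."""
    if (src_zone, dst_zone) in ALLOWED:
        return True
    log(f"DENY {src_zone} -> {dst_zone}")  # notification hook
    return False

check_flow("guest-wifi", "file-servers")   # denied and logged
```

Real firewalls evaluate ordered rules over addresses, ports, and protocols rather than bare zone labels, but the principle is the same: the deny is the easy part; wiring the notification into monitoring is what makes the architecture defensible.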



Quote for the day:


"Technology makes it possible for people to gain control over everything, except over technology." -- John Tudor


Daily Tech Digest - July 14, 2018


To date, the tools which underpin workforces have been developed as a natural extension of traditional workflows. Email replaced the memo, and video chat made the conference call more collaborative. But emerging technologies like advanced analytics, artificial intelligence, and machine learning are primed to provide a comprehensive look into the patterns and intricacies that make up the individual workplace experience. For example, with the right platform, IT departments can better understand which channels employees prefer, what is drawing them to these channels, and how they can better optimize them for even further productivity. Alternatively, they can identify problem areas within workflows and proactively ease the strain on employees themselves. As technology becomes more advanced, the human element becomes increasingly vital. Digital transformation saw a seismic shift in the way IT leaders approach their infrastructure, but workplace transformation requires a deep understanding of the unique ways individuals approach productivity.


Entity Services Increase Complexity

Entity services are modelled after defined entities (or nouns) within a system: for example, an accounts service, an order service and a customer service. Typically they have CRUD-like interfaces which operate on top of these entities. By taking this CRUD-like approach, entity services tend not to contain any meaningful business functionality. Instead, they are shallow modules, not really offering any complex or useful abstractions. ... Ultimately, these shallow entity services can turn into a cluster of highly coupled components, writes Abedrabbo. This leads to an operational burden, where more components must be deployed, scaled and monitored. This high coupling can also lead to challenging release processes, where many microservices must be deployed in order to deliver a single piece of functionality. It can also produce single points of failure, where many services depend on each other, meaning that if one fails it can bring down the entire system. Abedrabbo also explains that entity services create conceptual complexity, as the knowledge of how to compose them is not immediately obvious.
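To make the contrast concrete, here is a hypothetical sketch (both class names and the ordering domain are invented for illustration): a shallow CRUD-style entity service next to a capability-oriented service that owns an actual business rule.

```python
# A shallow "entity service": bare CRUD over order records, with all
# business logic pushed onto its callers (in-memory for illustration).
class OrderEntityService:
    def __init__(self):
        self._orders = {}

    def create(self, order_id, data):
        self._orders[order_id] = data

    def read(self, order_id):
        return self._orders.get(order_id)


# A capability-oriented service instead owns a meaningful behaviour,
# so placing an order needs no choreography across several services.
class OrderPlacementService:
    def __init__(self, stock):
        self._stock = stock          # item -> units available
        self._orders = []

    def place_order(self, customer, item, qty):
        if self._stock.get(item, 0) < qty:
            return False             # the business rule lives here
        self._stock[item] -= qty
        self._orders.append((customer, item, qty))
        return True
```

With the first design, the "can this order be placed?" rule has to live in every caller that touches orders and stock, which is precisely the coupling and conceptual complexity Abedrabbo warns about; the second design gives callers a behaviour to invoke rather than raw entity state to coordinate.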


An exciting time to be in cyber security innovation


There is a wide range of initiatives specifically around cyber security in the UK, says Chappell, including the Cyber Growth Partnership, which supports fast-growing security companies. “There are some great opportunities in this sector, which is partly due to our UK heritage going back to Bletchley Park,” he says. The UK also benefits from having top students from all over the world who come to further their education, a thriving financial sector and a strong defence sector. “We are lucky to have this heady mix of components that create an environment where it is great to be building a business,” says Chappell. Also, thanks to the likes of companies such as Message Labs and Sophos, the UK has useful templates or archetypes for fast-growing successful businesses that startups can draw upon, he adds. The growing number of incubators is also creating opportunities for cyber security innovators, with Lorca being the latest to join its sister centre in Cheltenham, the NCSC Cyber Accelerator, CyLon and its HutZero bootcamp for entrepreneurs.


Reddit Co-Founder Alexis Ohanian's Top Self-Care Strategies for Entrepreneurs

“Entrepreneurs have to have enough ego to think that our crazy idea, our vision for the future is going to work, before anyone else does. But [it’s important to] balance that with enough humility to know that you aren’t going to have all the answers,” Ohanian says. “You are going to need to rely on different points of view. Get the benefit of someone who is detached enough to give you honest feedback, but attached enough to know all the players and background information.” Ohanian’s feeling is, if you wouldn’t expect a talented athlete or sports team to play without their coach, why shouldn’t it be the same for a great entrepreneur? ... “One of the things founders and CEOs in particular should always be doing and keeping top of mind is celebrating those wins for their business,” Ohanian says. “It will never feel like a 100 percent win for the CEO or founder, because you’re always thinking about the 100 other things that need to get improved or fixed. But for all the people on your team, it is really vital to celebrate them and that success. Not in a way that gets people complacent, but rejuvenated and re-excited about the mission and vision.”


Why You Should Consider A Career In Cybersecurity


Cybersecurity professionals are generally among the most highly-compensated technology workers. According to the United States Department of Labor, the median annual wages for information security analysts is almost $100,000 nationally, with many jobs in various locations paying considerably higher. With the demand for cybersecurity professionals continuing to far outpace the supply, salaries are likely to continue rising. As such, investing in cybersecurity training now can pay off quite handsomely ... For multiple reasons, many companies are far less likely to let go of cybersecurity professionals than they would other employees. Shrinking the security team may increase the likelihood of a breach, and can dramatically increase the impact of a breach should one occur; think for a moment about customers’ and regulators’ reactions to news reports that “A large amount of personal data leaked after company X tried to save money by reducing its cybersecurity staff.” Of course, as alluded to before, another deterrent against letting information security professionals go is that employers know that it is often both difficult and expensive to find suitable replacements.


Let There Be Sight: How Deep Learning Is Helping the Blind ‘See’

Guide dogs are great for helping people who are blind or visually impaired navigate the world. But try getting a dog to read aloud a sign or tell you how much money is in your wallet. Seeing AI, an app developed by Microsoft AI & Research, has the answers. It essentially narrates the world for blind and low-vision users, allowing them to use their smartphones to identify everything from an object or a color to a dollar bill or a document. Since the app’s launch last year, it’s been downloaded 150,000 times and used in 5 million tasks, some of which were completed on behalf of one of the world’s most famous blind people. “Stevie Wonder uses it every day, which is pretty cool,” said Anirudh Koul, a senior data scientist with Microsoft, during a presentation at the GPU Technology Conference in San Jose last month. A live demo of the app showed just how powerful it can be. Koul had a colleague join him on stage, and when he launched the app on his smartphone and pointed it toward his co-worker, it declared that it was looking at “a 31-year-old man with black hair, wearing glasses, looking happy.”


Graphing the sensitive boundary between PII and publicly inferable insights

There is a fuzzy boundary between information that’s personally identifiable and insights about persons that are publicly inferable. GDPR and similar mandates only cover protection of discrete pieces of digital PII that are maintained in digital databases and other recordkeeping systems. But some observers seem to be arguing that GDPR also encompasses insights that might be gained in the future about somebody through analytics on unprotected data. That’s how I’m construing David Loshin’s statement that “sexual orientation [is] covered under GDPR, too.” My pushback to Loshin’s position is to point out that it’s not terribly common for businesses or nonprofits to record people’s sexual orientation, unless an organization specifically serves one or more segments of the LGBTQ community — and even then, it’s pointless and perhaps gauche and intrusive to ask people to declare their orientation formally as a condition of membership. So it’s unlikely you’ll find businesses maintaining PII profile records stating that someone is gay, lesbian, bisexual or whatever.
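The "publicly inferable" risk can be made concrete with a toy sketch: even when no record anywhere stores a sensitive attribute, a simple model over innocuous, unprotected signals can often recover it. Everything below — the page-like features, the group labels, the data — is invented purely for illustration, not drawn from any real dataset.

```python
from collections import Counter

# Fabricated training data: (innocuous public signals, sensitive label).
# No field here is PII; the sensitive attribute is never stored directly.
train = [
    ({"likes_page_a": 1, "likes_page_b": 0}, "group_x"),
    ({"likes_page_a": 1, "likes_page_b": 0}, "group_x"),
    ({"likes_page_a": 0, "likes_page_b": 1}, "group_y"),
    ({"likes_page_a": 0, "likes_page_b": 1}, "group_y"),
]

def predict(signals):
    """Nearest-neighbour vote: infer the unrecorded attribute from the
    labelled examples whose public signals look most similar."""
    def dist(a, b):
        keys = set(a) | set(b)
        return sum(abs(a.get(k, 0) - b.get(k, 0)) for k in keys)

    ranked = sorted(train, key=lambda ex: dist(ex[0], signals))
    votes = Counter(label for _, label in ranked[:3])
    return votes.most_common(1)[0][0]

# No database stores this person's group membership, yet it is
# recoverable from unprotected behavioural data alone.
print(predict({"likes_page_a": 1, "likes_page_b": 0}))  # -> group_x
```

This is the crux of the boundary dispute: the input records would pass a PII audit, while the output of the analytics would not.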


Ultimate Guide To Blockchain In Insurance

Within insurance, the claims and finance functions are high-value areas where blockchain could be beneficial, especially when you look at processes that need ongoing reconciliation with external parties. Consider how often Company A has a claim against Company B resulting in the exchange of money, typically in the form of a paper check or an electronic transaction. That could be completely automated using blockchain. Presently, many insurers are applying smart contracts alongside the blockchain; a contract executes when well-defined terms and conditions are met. By setting up an insurance contract that pays out under these circumstances, an insurer can process transactions with no human intervention and with greatly enhanced customer service. In other words, blockchain can help deliver on the digital opportunities that insurers must get right. These opportunities aren’t glamorous but they’re important: as I’ve said before, get them right and you won’t win — but get them wrong and you will lose. Blockchain can help insurers deliver on some brilliant basics.
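The claims flow described above can be sketched as a minimal parametric contract: once an agreed, externally observable condition is met (here, a hypothetical flight-delay feed), the payout executes with no human in the loop. This is plain Python standing in for on-chain smart-contract logic — the class, the oracle callback, and the ledger list are all illustrative inventions, not any real blockchain API.

```python
class ParametricPolicy:
    """Toy stand-in for an insurance smart contract: it pays out
    automatically when a well-defined trigger condition is met."""

    def __init__(self, insured, payout, threshold_minutes):
        self.insured = insured
        self.payout = payout
        self.threshold = threshold_minutes
        self.settled = False

    def on_oracle_update(self, delay_minutes, ledger):
        # An "oracle" reports an observable fact (e.g. a flight delay);
        # the contract itself decides whether the terms are satisfied.
        if not self.settled and delay_minutes >= self.threshold:
            ledger.append((self.insured, self.payout))  # automatic transfer
            self.settled = True
        return self.settled

ledger = []  # stand-in for the shared ledger both parties reconcile against
policy = ParametricPolicy("company_a", payout=500, threshold_minutes=120)
policy.on_oracle_update(45, ledger)   # below threshold: nothing happens
policy.on_oracle_update(180, ledger)  # condition met: payout recorded once
print(ledger)  # -> [('company_a', 500)]
```

Because both parties see the same ledger and the same trigger rule, the paper-check reconciliation step the article describes simply disappears.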


Preparing Your Business For The Artificial Intelligence Revolution


Artificial intelligence can be used to solve problems across the board. AI can help businesses increase sales, detect fraud, improve customer experience, automate work processes and provide predictive analysis. Industries like health care, automotive, financial services and logistics have a lot to gain from AI implementations. Artificial intelligence can give health care service providers better tools for early diagnostics. Autonomous cars are a direct result of improvements in AI. Financial services can benefit from AI-based process automation and fraud detection. Logistics companies can use AI for better inventory and delivery management. The retail business can map consumer behavior using AI. Utilities can use smart meters and smart grids to decrease power consumption. The rise of chatbots and virtual assistants is also a result of artificial intelligence. Amazon's Alexa, Google Home, Apple's Siri and Microsoft's Cortana all use AI-based algorithms to make life better. These technologies will take more prominent roles in dictating future consumer behavior.
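To ground one of the use cases listed above: fraud detection often starts with anomaly scoring, where transactions that deviate sharply from a customer's historical pattern are flagged for review. The sketch below uses a simple z-score rule over invented purchase amounts; production systems would use learned models, but the principle is the same.

```python
import statistics

def flag_anomalies(history, new_amounts, z_cutoff=3.0):
    """Flag transactions whose amount is a statistical outlier
    relative to the customer's history (simple z-score rule)."""
    mean = statistics.mean(history)
    stdev = statistics.stdev(history)
    return [amt for amt in new_amounts
            if abs(amt - mean) / stdev > z_cutoff]

# Invented example: typical purchases around $50, then one huge charge.
history = [42.0, 55.0, 48.0, 51.0, 60.0, 44.0]
print(flag_anomalies(history, [47.0, 4999.0]))  # -> [4999.0]
```

The $47 purchase sits well within the customer's normal range and passes silently; the $4,999 charge is hundreds of standard deviations out and gets flagged — the human review queue only ever sees the exceptions.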


Prime Minister Of Luxembourg Xavier Bettel On Technology, Culture And People

“Current” is always a bit of a difficult word when it comes to technology, because innovative ideas or products often grow and mature in waves. Consequently, over time, new technologies experience highs during which they are heavily publicized and on everybody’s mind. They also go through lows, during which they appear to be completely forgotten. Yet, the research continues! Having said that, I am actually very fond of the world of virtual and augmented reality. Yes, the technology, or at the very least the idea and concepts of VR and AR, have been around for quite some time now. But it is truly exciting to discover all the new opportunities these technologies offer us thanks to the recent advances in computing power, be it in the medical domain, in education, in transport…they make our world better and safer! ... In order to reap the full potential of our digital economy, European rules must ultimately enable and encourage our businesses and citizens to buy and sell their services and products anywhere in the European Union.



Quote for the day:


"The problem isn't a shortage of opportunities; it's a lack of perspective." -- Tim Fargo