Daily Tech Digest - August 13, 2018

Google DeepMind's AI can now detect over 50 sight-threatening eye conditions


In a project that began two years ago, DeepMind trained its machine learning algorithms using thousands of historic and fully anonymized eye scans to identify diseases that could lead to sight loss. According to the study, the system can now do so with 94 percent accuracy, and the hope is that it could eventually be used to transform how eye exams are conducted around the world. AI is taking on a number of roles within health care more widely. ... AI is also being used to help emergency call dispatchers in Europe detect heart attack situations. Diagnosing eye diseases from ocular scans is a complex and time-consuming task for doctors. Also, an aging global population means eye disease is becoming more prevalent, increasing the burden on healthcare systems. That's providing the opportunity for AI to pitch in. "The number of eye scans we're performing is growing at a pace much faster than human experts are able to interpret them," said Pearse Keane, consultant ophthalmologist at Moorfields, in a statement. "There is a risk that this may cause delays in the diagnosis and treatment of sight-threatening diseases, which can be devastating for patients."



How Fintech Is Transforming Access to Finance

A percentage of the digital transactions that merchants receive are set aside to repay their advances. This arrangement keeps repayments fluid, bite-sized, and in line with cash flow. In India, Capital Float, a nonbank finance company, provides instant decisions on collateral-free loans for small entrepreneurs. A risk profile assessment is carried out in real time by analyzing MSMEs’ cash flows using data from Paytm, an e-commerce payment system and digital wallet company, mobile financial services firm Payworld, and smartphones. Capital Float customers carry out electronic know-your-customer authentication, receive the loan offer, confirm acceptance, and sign the loan agreement on a mobile app. The loan amount is credited to their account on the same day, with nil paperwork. Cash flow loans help MSMEs seize opportunities when they arise and are an excellent example of the targeted, niche innovation that enables fintech to compete with more prominent—but slower—traditional banks. They are well-suited to businesses that maintain very high margins, but lack enough hard assets to offer as collateral.


The Commercial HPC Storage Checklist – Item 3 – Protection at Scale


Many HPC storage solutions provide only replication for data protection. Replication protects against media failure within a node by creating two or three additional copies of data on other nodes in the storage cluster. The problem is that a replication-only model forces the organization to store two or three full additional copies of data. While replication does maintain performance during a failure, the level of exposure to an additional failure is enormous. Most enterprise storage systems support a single or dual parity protection scheme. While parity does not have the capacity waste of a replicated system, it can hurt storage performance if the design of the storage system cannot maintain performance during a failure/rebuild process. A commercial HPC storage system needs to provide a parity-based protection scheme so that it does not waste capacity or data center floor space. Because restarting workloads is so time-consuming, it also needs multiple layers of redundancy so that one or two drive failures don't stop an HPC process from executing.
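The capacity tradeoff is easy to demonstrate. Below is a minimal, illustrative sketch (not any vendor's actual implementation) of single-parity protection: one XOR parity block per stripe lets the system rebuild any one lost block from the survivors, instead of keeping full extra copies.

```java
// Minimal single-parity sketch: store one XOR parity block per stripe;
// any one lost data block can be rebuilt from the surviving blocks.
public class ParitySketch {
    static byte[] xor(byte[] a, byte[] b) {
        byte[] out = new byte[a.length];
        for (int i = 0; i < a.length; i++) out[i] = (byte) (a[i] ^ b[i]);
        return out;
    }

    public static void main(String[] args) {
        byte[] d0 = {1, 2, 3, 4};
        byte[] d1 = {5, 6, 7, 8};
        byte[] d2 = {9, 10, 11, 12};
        // Parity block: XOR of all data blocks in the stripe.
        byte[] parity = xor(xor(d0, d1), d2);
        // Simulate losing d1, then rebuild it from the survivors plus parity.
        byte[] rebuilt = xor(xor(d0, d2), parity);
        System.out.println(java.util.Arrays.equals(rebuilt, d1)); // true
    }
}
```

For the three-block stripe above, parity adds one extra block (roughly 33 percent overhead) versus the two or three full copies that replication stores; real systems extend the same idea to dual parity so they can survive two simultaneous failures.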


How artificial intelligence is shaping our future


A world fuelled and enhanced by AI is one to look forward to. Autonomous cars will mean efficient and safe transport. Real-time translation buds that will enable you to speak one language and hear another will transform our travel experiences. Despite the cries of alarmists, there is little reason to believe that our AIs are going to “wake up” and decide to do away with us. ... New drugs, therapies and treatments will produce a revolution in the delivery of healthcare. What’s true for health is true for education, leisure, finance and travel. Every aspect of how individuals, corporations and governments function can be more effectively managed with the right application of the right data. ... Humans will come to confide, trust and rely on our new companions. They will support us for better or worse, in our prime and our decline. Powered by AI and abundant data, they may assume the characteristics of those dear or near to you. Imagine your late grandmother or your favourite rock star chatting helpfully in your living room.


Microsoft may soon add multi-session remote access to Windows 10 Enterprise

At this point, multi-session Remote Desktop Services (RDS) is a Windows Server-only feature, one that lets users run applications hosted on servers, whether the servers are on-premises or cloud-based. But the evidence uncovered by Alhonen hints that Microsoft will expand a form of RDS to Windows 10. "There's a ton of unanswered questions," said Wes Miller, an analyst at Directions on Microsoft, noting Microsoft's silence on such a move. He expected that some answers will be revealed at Microsoft Ignite, the company's massive conference for IT professionals that's set for Sept. 24-28, or with the release of Windows 10 1809 this fall. One thing he's sure of, however. "You won't see this running on hardware at a user's desktop," Miller said of Windows 10 Enterprise for Remote Sessions. Instead, he believes the SKU should be viewed as back-end infrastructure that will be installed at server farms in the virtual machines that populate those systems. If Windows Server serves - no pun intended - as the destination for remote sessions accessing applications or even desktops, why would Microsoft dilute the market with the presumably less expensive Windows 10 Enterprise SKU?


Will network management functions as a service arrive soon?

The cloud also eliminates the need for patching and upgrading software. Those functions would be handled by the vendor. In considering NMaaS, Laliberte said organizations should understand the underlying architectures, which in some cases could simply be individual licenses. "After that, it would come down to the cost model of Opex versus Capex, along with maintenance," he said. Laliberte said it is important to find out how the NMaaS offering charges, to determine the cost model, and to check whether there are any ingress charges for data collected. One of the other big issues, he added, is security. "If you are in a regulated industry or have sensitive information traversing your network and that data is being sent to the cloud, make sure to get the security team engaged and that they approve the model." NMaaS also enables the collection and dissemination of benchmarking data, which companies can use to determine how their networks compare to those of their peers. "It is a capability that could be very helpful for organizations to understand and to improve their own environment," Laliberte said.


Apcela optimizes Office 365 performance, improving user productivity

The architecture of the network Apcela has built follows the model of Network as a Service. It starts with a core network anchored on globally distributed carrier-neutral commercial data centers such as Equinix, which Apcela calls application hubs, or AppHUBs. These data centers are then connected with high capacity, low latency links. That high-performance core then interconnects to the network edge, which can be enterprise locations such as branches, manufacturing facilities, regional headquarters, data centers, and so on. This core network also interconnects with the cloud, connecting to the public internet, or directly peering with cloud data centers, such as those operated by Microsoft where the vendor hosts Office 365. ... A full security stack is also deployed to these commercial data centers. By distributing security and moving it out of the enterprise data center and into these distributed network nodes, a branch office simply goes to the nearest AppHUB to clear security there, and from there, it can go to the internet or to whatever SaaS applications it needs to use, rather than having to route all the way back through the enterprise data center before getting out to the cloud.


8 guidelines to help ensure success with robotic process automation

The first step is to find out what really goes on day-to-day in your organization. It is very surprising how many variants of a process can build up. Use process mining, process discovery tools or consultants to figure out what you actually do in a process. Methods to do so might include extracting system logs, or capturing mouse clicks and keystrokes, to find out how many ways an activity can happen and then eliminate the less optimal ways to automate the most common paths. Many different tools can be used to support automation, especially ones that have best-practice processes already built into them. Filtering to see if RPA should be used needs to start with understanding the process in order to then understand the choices of automation available in the short, medium and longer term. If people don't know what they do or how they do it, they're not ready to start with RPA. Standardized, repetitive tasks that re-key digital data are the optimal place to start assessing whether RPA makes sense.


How Smaller Financial Services Firms Can Win With Open-Banking Disruption

Even though only nine of Europe’s largest banks are required to comply with PSD2, many small and midsize financial services firms – as well as their much larger rivals – are warming to the idea of opening up their customers’ transactional data. Banks and insurers like the idea of using this information to propose more compelling lending options, credit lines, and investment services to their customers. But more importantly, their customers will have the power to dictate how their information is exchanged with other institutions to find the best way to manage their financial growth. According to the Oxford Economics study “The Transformation Imperative for Small and Midsize Financial Services Firms,” sponsored by SAP, small and midsize banks and insurers seem to be on the right digital path towards open banking. Surveyed participants indicated that they are heavily investing in efficient, scalable, and connected technology that can help keep their data and systems more secure and support innovation.


The Ethics of Security


The usual Black Mirror-style thought experiment (admittedly one not used in the 18th century) is to imagine you kindly drop by to visit a friend in hospital. On walking through the door, their new Benthamometer detects your healthy heart, lungs, liver and kidneys could save the lives of 5 sick people inside and your low social media friend count suggests few folk would miss you. Statistically, sacrificing you to save those 5 more popular patients is not only OK, it is morally imperative! It is the extreme edge cases of a utilitarian or statistical approach that are often the cause of algorithmic unfairness. If the target KPIs are met, then by definition the algorithm must be good, even if a few people do suffer a bit. No omelettes without some broken eggs! If you think this would never happen in reality, we only need to look at the use of algorithmically generated drone kill lists by the US government in Yemen. Journalist and human rights lawyer Cori Crider has revealed that thousands of apparently innocent people have been killed by America’s utilitarian approach to acceptable civilian casualties in a country they are supposed to be helping.



Quote for the day:


"Leaders know the importance of having someone in their lives who will unfailingly and fearlessly tell them the truth." -- Warren G. Bennis


Daily Tech Digest - August 12, 2018

Honestly, you should already be incorporating purpose into your company culture, regardless of technology. But the fact remains that making time to define that purpose—just like making time to define your business goals—will go a long way toward creating a digital culture that isn’t just present, but palpable and valued by customers and employees alike. For instance, when your purpose changes from making quality shoes to providing customers with a memorable shoe-buying/shoe-wearing experience, your digital culture lens will shift automatically. Suddenly, you’re thinking of ways to use technology in new and exciting ways for the customer, rather than just getting the job done. These are the kinds of shifts that drive digital transformation—and create positive change. Is change easy? No. But when it comes to creating successful digital transformation, the cards are on the table, and they’re incredibly easy to read. Successful transformation requires cultural change. Period. Following the steps above to build a stronger digital culture—however challenging—will help get you there.


React Native JavaScript framework stumbles

Facebook found that initial principles of React Native—serving as a single asynchronous, serializable, and batched bridge between JavaScript and native apps—made it harder to build features. The asynchronous bridge, for example, has meant JavaScript logic could not be integrated directly with native APIs expecting synchronous answers. And the batched bridge, which queues native calls, made it more difficult for React Native apps to call into functions implemented natively. ... Not everyone is waiting for Facebook to work out the kinks in React Native. Walmart Labs has built an open source platform, Electrode Native, for integrating React Native components into existing mobile applications. Running on Node.js 6 or later, Electrode Native lets developers select features to add to an application and packages them in a single library. Built-in dependency version control is included to control native dependencies for alignment to React Native components.


When Digital Innovation Becomes A Factory Of Business Outcomes

To claim a business and market leadership position, periodic or one-time innovation is not enough. Continuous innovation with the latest technologies is imperative to stay current with the processes and experiences that employees and customers demand. In some cases, global executives are committed to innovation but not satisfied with their innovation performance. This reality serves as an excellent reminder that a self-correcting structure, which includes measurement and the flexibility to change, is required to identify and implement innovations. Considering the low satisfaction among executives when it comes to innovation, a factory setup – which includes defined processes, connected systems, and one source of truth – emerges as the ideal self-correcting model. This factory approach inherently enables the repeatability, structure, measurability, root-cause visibility, continuous improvement, and cost optimization that businesses need to succeed.


The cost of a payment card data breach

The financial costs for breached organisations vary based on several factors. The size of the breach is the most important, but the affected payment channel, the number of servers and how those servers are interconnected also play an important role. There is also the cost of the breach response. Organisations will need to notify their regulator and affected parties. If they have planned for a breach, this will be less expensive than having to do it on the fly. The same is true if they intend to create a help page on their website and a phone line for customers to use to learn more about the incident. Organisations will probably also be required to cover the cost of a forensic investigation. A PCI Level 2 investigation will cost about £25,000–£50,000, and a Level 1 investigation will cost upwards of £100,000. Depending on the investigation’s findings, organisations might face tough disciplinary action. Fines for non-compliance are levied on the payment processors or card companies rather than the breached organisation.


How Payroll AI and Machine Learning Are Transforming Businesses

A number of problems surfaced even among companies that felt they had fully developed their payroll management solutions. One of the biggest problems was with tracking employee expenses. Only 33% and 21% of companies tracked domestic and global employee expenses, respectively. Many companies are just beginning to realize the imperfections in their payroll management processes. They are beginning to invest in new AI technology that can help them address these problems. ... Machine learning has helped payroll managers develop more efficient processes for handling these queries. One of the most common ways that they can improve interactivity and streamline customer service is by tracking the questions and responses between customers and payroll managers. After observing a pattern, they can help customer service representatives develop automated responses to the most frequent inquiries.


From finance to healthcare: 5 fintech trends that will benefit digital health products


While finance tracks money, the fitness industry is driving advances in biometric wearables that monitor and record steps and active minutes, heart rate, calories burned, even sleep. The relationship between tracking fitness and larger health issues has always been implicit, and the traction and success of personal performance products provides a bridge to more dedicated health technology. Today, the self-generated tracking and performance trend is driving a proliferation of health-related apps that encourage and empower customers to proactively engage with their wellness. Beyond cardio trackers, calorie counters, ovulation and menstruation apps, and pregnancy trackers, developments in more sophisticated clinical solutions are leading to technology that combines with wearables and mobile apps to monitor chronic conditions, including cardiovascular disease, diabetes, stress and mental health. And with advances in biometric accuracy, data analysis, machine learning, and AI algorithms, products and applications that tap this high-value data are forecast to proliferate.


IoT Innovation in Transportation

There has been continuous growth in the field of transportation over the past decade because of the crucial role of logistics in transport through air, water, or land. Managers or transportation heads have started searching for ways to improve profits and logistics by minimizing the costs associated with the project. While managers work on profits and ways to improve the transportation process, corporations have started to look towards the big data being generated in this field, which can point towards ways to improve transportation. IoT integrates perfectly with transportation, and there is huge scope to gather data at each step throughout the transportation process, giving managers a clear idea of how transport can be further improved. Trucks or any other type of transport facility can now be connected through IoT, which can provide the location of the vehicle, movement, speed, and estimated time of delivery through the exchange of data. None of this was possible previously; it is now, thanks to the introduction of IoT in this field.


What is continuous integration (CI): Faster, better software development

Continuous integration is a development philosophy backed by process mechanics and software build automation. When practicing CI, developers commit their code into the version-control repository frequently, and most teams have a minimal standard of committing code at least daily. The rationale behind this is that it’s easier to identify defects and other software quality issues on smaller code differentials rather than larger ones developed over extensive periods of time. In addition, when developers work on shorter commit cycles, it is less likely for multiple developers to be editing the same code and requiring a merge when committing. Teams implementing continuous integration often start with version-control configuration and practice definitions. Even though checking in code is done frequently, features and fixes are implemented on both short and longer time frames. Development teams practicing continuous integration use different techniques to control which features and code are ready for production.


Race Condition vs. Data Race in Java

A race condition is a property of an algorithm (or a program, system, etc.) that is manifested in displaying anomalous outcomes or behavior because of the unfortunate ordering (or relative timing) of events. A data race is the property of an execution of a program. According to the Java Memory Model (JMM), an execution is said to contain a data race if it contains at least two conflicting accesses (reads of or writes to the same variable) that are not ordered by a happens-before (HB) relationship (two accesses to the same variable are said to be conflicting if at least one of the accesses is a write). This definition can probably be generalized by saying that an execution contains a data race if it contains at least two conflicting accesses that are not properly coordinated (a.k.a synchronized), but I am going to talk about data races as they are defined by the JMM. And, unfortunately, the above definition has a significant flaw. ... Despite the incorrect definition stated by JMM that remains unchanged, I am going to use a fixed version.
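A minimal sketch (class and field names are invented) makes the JMM definition concrete: two threads perform conflicting, unsynchronized writes to the same field with no happens-before ordering between them, so the execution contains a data race, and interleaved increments can also be lost.

```java
// Illustrative only: two threads make conflicting, unsynchronized accesses
// to the same field, so per the JMM this execution contains a data race.
public class DataRaceDemo {
    static int counter = 0; // shared mutable state, no synchronization

    public static void main(String[] args) throws InterruptedException {
        Runnable work = () -> {
            // counter++ is a non-atomic read-modify-write.
            for (int i = 0; i < 10_000; i++) counter++;
        };
        Thread t1 = new Thread(work);
        Thread t2 = new Thread(work);
        t1.start(); t2.start();
        t1.join(); t2.join();
        // Frequently prints less than 20000: interleaved increments overwrite
        // each other (a race condition), and the unordered conflicting
        // accesses constitute a data race in JMM terms either way.
        System.out.println(counter);
    }
}
```

The example also separates the two notions: declaring `counter` as `volatile` would eliminate the data race (every access would then be ordered by happens-before) but not the race condition, because the read-increment-write is still not atomic; an `AtomicInteger` with `incrementAndGet()` removes both.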


Multiple Solutions Hardens Posture but Creates Agent and Alert Fatigue

CISOs agree that prevention is faulty, but investigation is a burden. EDR capabilities can provide improved detection and response approaches to prolific security incidents, and using automation can help to address the global shortage of cybersecurity professionals. Specifically, EDR tools best fit resource-strapped businesses with lean IT teams that operate without a Security Operation Center (SOC). However, half of IT executives worldwide said that managing EDR tools is difficult or very difficult. In both the US and UK, 49 percent of all endpoint alerts triggered by monitoring and response techniques turned out to be false alarms. Sixty-four percent of Americans in companies with no SOC said monitoring activities are one of their toughest challenges. Spotting an ongoing breach also means fighting alert fatigue caused by noisy traditional security solutions. It’s a race against time when filtering security alerts, which can be especially difficult if the organization is understaffed and overburdened.



Quote for the day:

"Tact is the ability to tell someone to go to hell in such a way that they look forward to the trip." -- Winston S. Churchill

Daily Tech Digest - August 11, 2018

Most scientists would probably agree that prediction and understanding are not the same thing. The reason lies in the origin myth of physics—and arguably, that of modern science as a whole. For more than a millennium, the story goes, people used methods handed down by the Greco-Roman mathematician Ptolemy to predict how the planets moved across the sky. Ptolemy didn’t know anything about the theory of gravity or even that the sun was at the centre of the solar system. His methods involved arcane computations using circles within circles within circles. While they predicted planetary motion rather well, there was no understanding of why these methods worked, and why planets ought to follow such complicated rules. Then came Copernicus, Galileo, Kepler and Newton. Newton discovered the fundamental differential equations that govern the motion of every planet. The same differential equations could be used to describe every planet in the solar system. This was clearly good, because now we understood why planets move.


3 Trends in Organization Design Presenting Opportunities for Leaders

Today, nearly every business has digitized to some extent. Some companies—for example, Uber and Amazon—have used digital solutions to create business models that would have been unimaginable in the 1980s. While not every business needs to be digitized to the same extent as Uber, nearly every business can benefit from exploring the use of artificial intelligence, data and analytics, and other technology to improve capabilities and results not just incrementally but exponentially. Capitalizing on these potentials, however, does require strong leadership and a willingness to change and adapt. You can’t just plug a new technology into an old framework without affecting other aspects of the organization, such as how work is done, how the structure is designed, how metrics are used to drive performance, what skills and talent are needed, and how culture will reinforce strategy. ... Agile is another organization design trend that has its roots in the digital world. It is a way of working that enables a company to respond more quickly to changes in the marketplace, and it can result in a more nimble, resilient organization.


Are You Spending Way Too Much on Software?

Companies are allowing their data to get too complex by independently acquiring or building applications. Each of these applications has thousands to hundreds of thousands of distinctions built into it. For example, every table, column, and other element is another distinction that somebody writing code or somebody looking at screens or reading reports has to know. In a big company, this can add up to millions of distinctions. But in every company I’ve ever studied, there are only a few hundred key concepts and relationships that the entire business runs on. Once you understand that, you realize all of these millions of distinctions are just slight variations of those few hundred important things. In fact, you discover that many of the slight variations aren’t variations at all. They’re really the same things with different names, different structures, or different labels. So it’s desirable to describe those few hundred concepts and relationships in the form of a declarative model that small amounts of code refer to again and again.


How do data companies get our data?

Research has shown that more than three in four Android apps contain at least one third-party tracker. Third-party app analytics companies play a crucial role for advertisers and app developers. Though some are used to better understand how users use apps, a vast majority are used for targeted advertising, behavioural analytics, and location tracking. The problem is that there is no real way to opt out of such third-party tracking. In addition to third-party trackers embedded in apps, apps themselves frequently access users’ entire address books, location data, photos and more, sometimes even if you have explicitly turned off access to such data. ... Another major source of data for data companies is surveys – this was at the heart of the 2018 Cambridge Analytica scandal. This includes things such as personality quizzes, online games and tests, and more. When a company asks you to rate a product, your opinion may benefit many other companies. The data company Epsilon for instance has created a database called Shopper’s Voice boasting “unique insights you won’t find anywhere else, directly from tens of millions of consumers.”


Banking Giant ING Is Quietly Becoming a Serious Blockchain Innovator

ING is out to prove that startups aren't the only ones that can advance blockchain cryptography. Rather than waiting on the sidelines for innovation to arrive, the Netherlands-based bank is diving headlong into a problem that it turns out worries financial institutions as much as average cryptocurrency users. In fact, the bank first made a splash in November of last year by modifying an area of cryptography known as zero-knowledge proofs. Simply put, the code allows someone to prove that they have knowledge of a secret without revealing the secret itself. On their own, zero-knowledge proofs were a promising tool for financial institutions that were intrigued by the benefits of shared ledgers but wary of revealing too much data to their competitors. The technique, previously applied in the cryptocurrency world by zcash, offered banks a way to transfer assets on these networks without tipping their hands or compromising client confidentiality. But ING has come up with a modified version called "zero-knowledge range proofs," which can prove that a number is within a certain range without revealing exactly what that number is.


What is data wrangling and how can you leverage it for your business?

Regardless of how unexciting the process of data wrangling might be, it’s still critical because it makes your data useful. Properly wrangled data can provide value through analysis or be fed into a collaboration and workflow tool to drive downstream action once it’s been conformed to the target form. Conformance or transforming disparate data elements into the same format also addresses the problem of siloed data. Siloed data assets cannot “talk” to each other without translating data elements between the different formats, which is often time or cost prohibitive. Another benefit of data wrangling is that it can be organized into a standardized and repeatable process that moves and transforms data sources into a common format, which can be reused multiple times. Once your data has been conformed to a standard format, you’re in a position to do some very valuable, cross-data set analytics. Conformance is even more valuable when multiple data sources are wrangled into the same format.
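As a toy illustration of conformance (all field names here are invented, not drawn from any real product), consider two siloed sources that describe the same customer in different shapes; wrangling maps both into one common target format so they can finally "talk" to each other:

```java
import java.util.Map;

// A toy sketch of conformance: two siloed sources describe the same customer
// in different shapes; wrangling maps both into one common target format so
// cross-source analytics become possible.
public class WranglingSketch {
    // Target format: "name" plus an ISO-8601 "signup_date".
    static Map<String, String> conformCrm(Map<String, String> r) {
        return Map.of("name", r.get("cust_name"),
                      "signup_date", r.get("joined")); // already ISO
    }

    static Map<String, String> conformBilling(Map<String, String> r) {
        String[] dmy = r.get("signupDate").split("/"); // dd/MM/yyyy -> ISO
        return Map.of("name", r.get("customerName"),
                      "signup_date", dmy[2] + "-" + dmy[1] + "-" + dmy[0]);
    }

    public static void main(String[] args) {
        Map<String, String> crm = Map.of("cust_name", "Ada Lovelace", "joined", "1840-12-10");
        Map<String, String> billing = Map.of("customerName", "Ada Lovelace", "signupDate", "10/12/1840");
        // Both sources now conform to the same shape and can be compared.
        System.out.println(conformCrm(crm).equals(conformBilling(billing))); // true
    }
}
```

Once such mappings exist, they form exactly the standardized, repeatable process the text describes: the same transformation can be reused every time new records arrive from either silo.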


Digital transformation and the law of small numbers

digital transformation
Across industries, there is more downbeat news on digital transformation. A recent study by consulting firm Capgemini and the MIT Center for Digital Business concludes that organizations are struggling to convert their digital investments into business successes. The reasons are illuminating and many: lack of digital leadership skills, and a lack of alignment between IT and business, to name a couple. The study goes on to suggest that companies have underestimated the challenge of digital transformation and that organizations have done a poor job of engaging employees across the enterprise in the digital transformation journey. These findings may sound surprising to technology vendors, all of whom have gone “digital” in anticipation of big rewards from the digital bonanza (at least one global consulting firm has gone so far as to tie senior executive compensation to “digital” revenues). Anecdotally, “digital” revenues are still under 30 percent of total revenues for most technology firms, which further corroborates the findings of market studies on the state of digital transformation.


Containers Are Eating the World

The container delivery workflow is fundamentally different. Dev and ops collaborate to create a single container image, composed of different layers. These layers start with the OS, then add dependencies (each in its own layer), and finally the application artifacts. More important, container images are treated by the software delivery process as immutable images: any change to the underlying software requires a rebuild of the entire container image. Container technology, and Docker images, have made this far more practical than earlier approaches such as VM image construction by using union file systems to compose a base OS image with the applications and its dependencies; changes to each layer only require rebuilding that layer. This makes each container image rebuild far cheaper than recreating a full VM image. In addition, well-architected containers only run one foreground process, which dovetails well with the practice of decomposing an application into well-factored pieces, often referred to as microservices.
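A hypothetical Dockerfile (the package and artifact names are made up for illustration) makes the layering concrete: each instruction contributes its own immutable layer, so changing the application artifact only rebuilds the final layers while the OS and dependency layers stay cached.

```dockerfile
# Hypothetical build file showing how each instruction adds its own layer.

# Base OS layer.
FROM ubuntu:18.04

# Dependency layer: rebuilt only when this line (or the base) changes.
RUN apt-get update && apt-get install -y --no-install-recommends openjdk-8-jre

# Application artifact layer: a new app.jar rebuilds only this layer;
# the cached OS and dependency layers above are reused.
COPY app.jar /opt/app/app.jar

# One foreground process per container, per the microservices practice above.
CMD ["java", "-jar", "/opt/app/app.jar"]
```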



How to build a layered approach to security in microservices


Microservices that need addresses across multiple applications make address-based security more complicated. For a different approach, you can group applications that share microservices into a common cluster, based on a common private IP address. Through this approach, all the components within the cluster are capable of addressing each other, but you will still need to expose them for communications outside that private network. If a microservice is broadly used across many applications, you should host it in its own cluster, and its address should be exposed to the enterprise virtual private network or the internet, depending on its scope. Network-based security reduces the chances of an intruder accessing a microservice API, but it won't protect against intrusions launched from within the private network. A Trojan or other hacked application could still gain access at the network level, so you may need to add another level of security in microservices. This is the access control level. Access control relies on the microservice recognizing that a request is from an authentic source.
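At the access control level, one common pattern is a signed token that lets a microservice verify the caller locally, without a network hop. A minimal sketch using Python's standard library (the service name and shared key are hypothetical; production systems would typically use a standard such as OAuth 2.0 or JWTs):

```python
import hashlib
import hmac

SHARED_KEY = b"demo-key"  # assumption: distributed to services out of band

def issue_token(caller: str) -> str:
    """Sign the caller's identity with a key only trusted services hold."""
    sig = hmac.new(SHARED_KEY, caller.encode(), hashlib.sha256).hexdigest()
    return f"{caller}:{sig}"

def verify_token(token: str) -> bool:
    """Recompute the signature; a forged caller name won't match."""
    caller, _, sig = token.partition(":")
    expected = hmac.new(SHARED_KEY, caller.encode(), hashlib.sha256).hexdigest()
    return hmac.compare_digest(sig, expected)

token = issue_token("billing-service")
print(verify_token(token))                   # True  -- authentic source
print(verify_token("intruder:deadbeef"))     # False -- forged request rejected
```

Because verification needs only the shared key, a request arriving from inside the private network is still rejected unless it carries a valid token.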


WhiteSource Launches Free Open Source Vulnerability Checking

After completing a scan of the user's requested libraries, the Vulnerability Checker shows all vulnerabilities detected in the software and the path, indicating which library includes which vulnerability. We also show the CVSS 3.0 score, provide links to references and even supply the suggested fix per the open source community. In the WhiteSource full platform we further provide information regarding whether you are actually making calls to the vulnerable functionality and a full trace analysis to provide insights for faster remediation for all known vulnerabilities (not just the top fifty from the previous month). WhiteSource automates the entire process of open source component management, from selection through approval to finding and fixing vulnerabilities in real time. It is a SaaS offering priced annually per contributing developer, meaning the number of developers working on the relevant applications. We offer our full platform services free of charge for open source projects.



Quote for the day:

"Your excuses are nothing more than the lies your fears have sold you." -- Robin Sharma

Daily Tech Digest - August 10, 2018


Headline breakthroughs in AI have come fast and furious in recent years, fuelled by the rapid maturing of techniques using deep learning, the success of GPUs at accelerating these compute-hungry tasks, and the availability of open-source libraries like TensorFlow, Caffe, Theano and PyTorch. This has accelerated innovation and experimentation, leading to impressive new products and services from large tech vendors like Google, Facebook, Apple, Microsoft, Uber and Tesla. However, I predict that these emerging AI technologies will be very slow to penetrate other industries. A handful of massive consumer tech companies already have the infrastructure in place to make use of the mountains of data they have access to, but the fact is that most other organisations don’t – and won’t for a while yet. There are two core hurdles to widespread adoption of AI: engineering big data management, and engineering AI pipelines. ... AI engineering competency is the next hurdle – and it’s likely to be many years yet before it becomes widespread across industries beyond the tech giants.


Enterprises should be able to sell their excess internet capacity

The idea is that those with excess data capacity, such as a well-provisioned office or data center, which may not be using all of its throughput capacity all of the time — such as during the weekend — allocates that spare bandwidth to Dove’s network. Passing-by data users, such as Internet of Things sensors or individuals going about their business, would then grab the data they need; payment is handled seamlessly through blockchain smart contracts. “The Dove application will find the closest Dove-powered hotspot or peer node, negotiate the package deal, and connect automatically,” the company says in a white paper. Dove Network says it intends to supply a 500-yard-plus-range, blockchain-based wireless router to vendors. It’s also talking about longer-range access points in the future. Both solutions will allow relatively few organizations to sign up, yet still blanket urban areas with hotspots, it says. Dove Network further says on its website that it believes internet infrastructure is broken. It reckons half of the world is not connected to the internet, yet 35 percent of paid-for data is never used.


Can SNMP (Still) Be Used to Detect DDoS Attacks?


Polling from the cloud every five seconds might not be the way one wants to build attack detection. And even if one does, it is limited to detecting attacks where the smallest burst is no shorter than 10 seconds. What to do when the burst is six seconds, or less? The SNMP polling method simply does not scale for the detection of burst attacks, and we need to move away from pull-based analytics to real-time, event-based methods. On-box RMON rules with threshold detection, generating SNMP traps, provide one alternative without introducing new technologies or protocols. However, what is possible in terms of detections and triggers for SNMP traps will depend on the capabilities of your device. That said, most network equipment manufacturers provide performance management and streaming analytics that by far exceed the possibilities of SNMP. Now would be a good time to look at those alternatives and implement an on- or off-box automation for attack detection and trigger traffic redirection through API calls to the cloud service.
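The arithmetic behind the burst-detection limit can be illustrated with a short simulation (synthetic traffic numbers; a toy model of counter deltas, not real SNMP polling):

```python
POLL_INTERVAL = 5  # seconds between polls

def window_rates(per_second, interval=POLL_INTERVAL):
    """Average rate in each polling window -- all that counter deltas reveal."""
    return [sum(per_second[i:i + interval]) / interval
            for i in range(0, len(per_second), interval)]

BASELINE, ATTACK, THRESHOLD = 10, 100, 80  # Mbps (illustrative values)

def windows_over_threshold(burst_len, start=22):
    """Simulate 60 s of traffic with one burst; report windows that alert."""
    traffic = [BASELINE] * 60
    traffic[start:start + burst_len] = [ATTACK] * burst_len
    return [r for r in window_rates(traffic) if r > THRESHOLD]

# A 6 s burst straddles two windows and is averaged down below threshold:
print(windows_over_threshold(6))    # [] -- the burst is missed entirely
# A 12 s burst spans at least one full window (plus most of another):
print(windows_over_threshold(12))   # [100.0, 82.0] -- detected
```

A per-second, event-based detector would see the 6-second burst immediately; interval averaging is what hides it.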


Hairy artificial skin gives robots a sense of touch

The smart skin includes nanowire sensors made from zinc oxide (ZnO). They are much thinner than human hair (0.2 microns, while hair is around 40 microns), and when they brush against something, they can sense temperature changes and surface variations. These nanowires are covered in a protective coating that makes them resistant to chemicals, extreme temperatures, moisture, and shock, so they can be used in harsh environments. The nanowires and protective coating are bundled together into one sheet of pressure sensing "skin" that can be draped over a robot, so existing robots such as a fleet of industrial arms at a manufacturing plant could be retrofitted with a new sense of touch. While the image of hairy robots is endearing, the skin actually just looks like a sheet of plastic with patches of sensors. The "hairs" are so small that you can't feel them, and they can only be seen under a microscope. The researchers describe their smart skin in a paper that published in IEEE Sensors Journal in 2015, and they have now received a patent for their technology. We asked the lead researcher Zeynep Çelik-Butler how this stands out from other smart skin technologies.


Data veracity challenge puts spotlight on trust

This data veracity challenge is one that most businesses have yet to come to grips with. In our Technology Vision for Oracle 2018, 79 percent of the business executives we spoke with agreed that organizations are basing their most critical systems and strategies on data – yet many have not invested in the capabilities to verify the truth within it. If we’re to fully harness data for the benefit of businesses and society, then this challenge needs to be addressed head on. In the past year the company unveiled its Autonomous Database, which further maintains data purity by – as the name implies – offering total automation and thereby vastly reducing human error. Steps like these are critical, as data services and websites rely on DaaS to properly analyze their data and provide holistic views of customers. To address the data veracity challenge, businesses should focus on three tenets to build confidence: 1) provenance, or verifying the history of data from its origin throughout its life cycle; 2) context, or considering the circumstances around its use; and 3) integrity, or securing and maintaining data.


Numerous OpenEMR Security Flaws Found; Most Patched

The OpenEMR community "is very thankful to Project Insecurity for their report, which led to an improvement in OpenEMR's security," Brady Miller, OpenEMR project administrator, tells ISMG. "Responsible security vulnerability reporting is an invaluable asset for OpenEMR and all open source projects. The OpenEMR community takes security seriously and considered this vulnerability report a high priority since one of the reported vulnerabilities did not require authentication," Miller says. "A patch was promptly released and announced to the community. Additionally, all downstream packages and cloud offerings were patched." So, what's been fixed? "The key vulnerability in this report is the patient portal authentication bypass, which essentially allows a bad actor to bypass authentication and gain access to OpenEMR - if the patient portal is turned on," Miller says. "All the other vulnerabilities require authentication." The patient portal authentication bypass, multiple instances of SQL injection, unrestricted file upload, remote code execution and arbitrary file actions vulnerabilities "were all fixed," he says.


What can the enterprise learn from the connected home?


The main driver for enterprise IoT is that the large volumes of data created by connected devices present a huge opportunity. By leveraging the power of analytics – either on a small scale or across large deployments – businesses can gain additional layers of insight into their operations and make improvements. This is exactly what the smart home enables. By using connected products to track energy usage, for example, consumers can learn where they are spending the most money and become more cost-efficient. However, from an enterprise perspective, the challenge comes in being able to efficiently manage and control hundreds or potentially thousands of smart devices. Simply keeping track of the vast swathes of data being generated from devices in a range of different locations and from an assortment of vendors, is already a serious issue and is likely to be the biggest IoT challenge IT departments will face in the future. What they don’t want is to have several platforms pulling in different data streams. Not only would this be hugely confusing to manage, the lack of coordination would create a fragmented picture of what is going on across the business.


How API-based integration dissolves SaaS connectivity limits


API integration supports multichannel experiences that improve customer engagement. One example is how integration helps businesses partner with other service providers to offer new capabilities, such as an API model that makes Uber services available on a United Airlines application. APIs also spur revenue growth. For instance, a business's IP [intellectual property] that lies behind firewalls can be exposed as an API to create new revenue channels. Many new-age companies, such as Airbnb and Lyft, leverage the API model to deliver revenue. Traditional companies [in] manufacturing and other [industries] are really applying this to their domain. API-first design provides modernized back-end interfaces that speed integrations. Doing back-end integrations? You can run the APIs within the data center to integrate SaaS and on-premises applications. A well-designed API can actually reduce the cost of integration by 50%.


Serverless Still Requires Infrastructure Management


Even though the servers are gone from the serverless picture, this doesn’t mean you can forget about infrastructure configuration altogether. Rather than configuring compute instances and many network-related resources, which was commonplace for the traditional IaaS stack, we now need to configure functions, storage buckets and/or tables, APIs, messaging queues/topics and many additional resources to keep everything secured and monitored. When it comes to infrastructure management, serverless architectures usually require more resources to be managed due to the fine-grained nature of serverless stacks. At the same time, without servers in sight, infrastructure configuration can be done as a single-stage activity, in contrast with the need to manage IaaS infrastructure separately from the software artifacts running on different kinds of servers. Even with this somewhat simplified way of managing infrastructure resources one still needs to use specialised tools for defining and applying infrastructure stack configurations. Cloud platform providers offer their proprietary solutions in this area.


5 ways machine learning makes life harder for cybersecurity pros

Machine learning is a form of AI that interprets massive amounts of data, applies algorithms to the material, and makes predictions from its observations. Common technologies that employ machine learning include facial recognition, speech recognition, translation services, and object recognition. Businesses typically use machine learning for locating and processing large data sets that no human could sort through in a timely manner, if at all. Major companies like Amazon, IBM, Google, and Microsoft use machine learning to improve business functionality. But some organizations are implementing machine learning for a narrower purpose: cybersecurity. While many assume machine learning makes cybersecurity professionals' lives much easier by better tracking security issues, that's not necessarily the case. Just like any new technology, machine learning still has its flaws—problems that turn the tech into more of a headache than a helping hand in the security space.



Quote for the day:


"Making those around you feel invisible is the opposite of leadership." -- Margaret Heffernan


Daily Tech Digest - August 09, 2018

Where low-code development works — and where it doesn’t
In any organization, you will find two kinds of processes: those that are structured and those that are more open-ended. Structured processes, which are typically followed rigorously, account for roughly two-thirds of all operations at an organization. These are generally the “life support” functions of any company or large group—things like leave management, attendance, and procurement. ... To avoid chaos, this workflow should remain consistent from week to week, and even quarter to quarter. Given the clear structure and obvious objectives, these processes can be handled nicely by a low-code solution. But open-ended processes are not so easy to define, and the goals aren’t always as clear. Imagine hosting a one-time event. You may know a little about what the end result should look like, but you can’t predefine the planning process because you don’t orchestrate these events all the time. These undefined processes, like setting an agenda for an offsite meeting, tend to be much more collaborative, and they often evolve organically as inputs from multiple stakeholders shape the space.



Adopt these continuous delivery principles organization-wide


Upper management should advocate for continuous delivery principles and enforce best practices. Once an organization has set up strong CD pipelines and is reaping the benefits, it should resist any temptation to fall back to older, less automated deployment models just because of team conflicts or a lack of oversight. If a group must work closely together but cannot agree on continuous delivery practices, it's critical that upper management understands CD and its importance to software delivery, pushing the continuous agenda forward and encouraging adoption. Regulation is rarely considered a driver of innovation, so before your team adopts continuous delivery practices, understand any regulatory requirements the organization is under. No one wants to put together a CI/CD pipeline only to have the legal department shut it down. An auditor needs to be informed about and understand, for example, the automated testing procedure in a continuous delivery pipeline. And the simple fact that a process is repeatable does not mean it adheres to the regulatory rules.


Incomplete visibility a top security failing


While many security teams implement good basic protections around administrative privileges, the report said these low-hanging-fruit controls should be in place at more organisations, with 31% of organisations still not requiring default passwords to be changed, and 41% still not using multifactor authentication for accessing administrative accounts. Organisations can start to build up cyber hygiene by following established best practices such as the Critical Security Controls, a prioritised set of steps maintained by the CIS. Although there are 20 controls, the report said implementing just the top six establishes what CIS calls “cyber hygiene.” “Industry standards are one way to leverage the broader community, which is important with the resource constraints that most organisations experience,” said Tim Erlin, vice-president of product management and strategy at Tripwire. “It’s surprising that so many respondents aren’t using established frameworks to provide a baseline for measuring their security posture. It’s vital to get a clear picture of where you are so you can plan a path forward.”


Political Play: Indicting Other Nations' Hackers

While it's impossible to gain a complete view of these operations, FireEye suggested that they were being run much more carefully. For example, one ongoing campaign appeared to target U.S. engineering and maritime targets, and especially those connected to South China Sea issues. "From what we observed, Chinese state actors can gain access to most firms when they need to," Bryce Boland, CTO for Asia-Pacific at FireEye, told South China Morning Post in April. "It's a matter of when they choose to and also whether or not they steal the information that is within the agreement." Now, of course, the U.S. appears to be trying to bring diplomatic pressure to bear on Russia as U.S. intelligence leaders warn that Moscow's election-interference campaigns have not diminished at all since 2016. "We have been clear in our assessments of Russian meddling in the 2016 election and their ongoing, pervasive efforts to undermine our democracy," Director of National Intelligence Dan Coats said last month.


RESTful Architecture 101


When deployed correctly, it provides uniform interoperability between different applications on the internet. The term stateless is a crucial piece of this, as it allows applications to communicate agnostically. A RESTful API service is exposed through a Uniform Resource Locator (URL). This logical name separates the identity of the resource from what is accepted or returned. The URL scheme is defined in RFC 1738. A RESTful resource must have the capability of being created, requested, updated, or deleted. This sequence of actions is commonly referred to as CRUD. To request and retrieve the resource, a client would issue a Hypertext Transfer Protocol (HTTP) GET request. This is the most common request and is executed every time you type a URL into a browser and hit return, select a bookmark, or click through an anchor reference link. ... An important aspect of a RESTful request is that each request contains enough state to answer the request. This allows for visibility and statelessness on the server, desirable properties for scaling systems up and identifying what requests are being made.
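The CRUD-to-HTTP mapping and the self-contained nature of a stateless request can be sketched as follows (the resource URL and token are hypothetical, purely for illustration):

```python
# Conventional mapping of CRUD actions onto HTTP methods.
CRUD_TO_HTTP = {
    "create":  "POST",
    "request": "GET",
    "update":  "PUT",     # or PATCH for partial updates
    "delete":  "DELETE",
}

def build_request(action, resource_url, token=None):
    """A stateless request: method, URL, and credentials travel together,
    so the server needs no session memory to answer it."""
    headers = {"Accept": "application/json"}
    if token:
        headers["Authorization"] = f"Bearer {token}"
    return {"method": CRUD_TO_HTTP[action], "url": resource_url, "headers": headers}

req = build_request("request", "https://api.example.com/orders/42", token="abc123")
print(req["method"], req["url"])  # GET https://api.example.com/orders/42
```

Because every request carries its own state, any server replica can answer it, which is precisely what makes horizontal scaling straightforward.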


Oracle's Database Service Offerings Could Be Its Last Best Hope For Cloud Success

All that said, if Oracle could adjust, it has the advantage of having a foothold inside the enterprise. It also claims a painless transition from on-prem Oracle database to its database cloud service, which if a company is considering moving to the cloud could be attractive. There is also the autonomous aspect of its cloud database offerings, which promises to be self-tuning, self-healing with automated maintenance and updates and very little downtime. Carl Olofson, an analyst with IDC who covers the database market sees Oracle’s database service offerings as critical to its cloud aspirations, but expects business could move slowly here. “Certainly, this development (Oracle’s database offerings) looms large for those whose core systems run on Oracle Database, but there are other factors to consider, including any planned or active investment in SaaS on other cloud platforms, the overall future database strategy, the complexity of moving operations from the datacenter to the cloud


Enterprise IT struggles with DevOps for mainframe


"At companies with core back-end mainframe systems, there are monolithic apps -- sometimes 30 to 40 years old -- operated with tribal knowledge," said Ramesh Ganapathy, assistant vice president of DevOps for Mphasis, a consulting firm in New York whose clients include large banks. "Distributed systems, where new developers work in an Agile manner, consume data from the mainframe. And, ultimately, these companies aren't able to reduce their time to market with new applications." Velocity, flexibility and ephemeral apps have become the norm in distributed systems, while mainframe environments remain their polar opposite: stalwart platforms with unmatched reliability, but not designed for rapid change. The obvious answer would be a migration off the mainframe, but it's not quite so simple. "It depends on the client appetite for risk, and affordability also matters," Ganapathy said. "Not all apps can be modernized -- at least, not quickly; any legacy mainframe modernization will go on for years."


Mitigating Cascading Failure at Lyft


Cascading failure is one of the primary causes of unavailability in high throughput distributed systems. Over the past four years, Lyft has transitioned from a monolithic architecture to hundreds of microservices. As the number of microservices grew, so did the number of outages due to cascading failure or accidental internal denial of service. Today, these failure scenarios are largely a solved problem within the Lyft infrastructure. Every service deployed at Lyft gets throughput and concurrency protection automatically. With some targeted configuration changes to our most critical services, there has been a 95% reduction in load-based incidents that impact the user experience. Before we examine specific failure scenarios and the corresponding protection mechanisms, let's first understand how network defense is deployed at Lyft. Envoy is a proxy that originated at Lyft and was later open-sourced and donated to the Cloud Native Computing Foundation. What separates Envoy from many other load balancing solutions is that it was designed to be deployed in a "mesh" configuration.
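Concurrency protection of the kind described boils down to capping in-flight requests and shedding the excess immediately, rather than letting queues build up and propagate the failure downstream. A minimal sketch in Python (illustrative only; Envoy enforces such limits in the proxy layer, not in application code):

```python
import threading

class ConcurrencyLimiter:
    """Reject requests beyond a fixed number in flight, instead of queuing
    them -- queued work is what turns one slow service into a cascade."""
    def __init__(self, max_in_flight):
        self._slots = threading.Semaphore(max_in_flight)

    def try_call(self, fn, *args):
        if not self._slots.acquire(blocking=False):
            return None  # shed load: the caller sees a fast 503-style failure
        try:
            return fn(*args)
        finally:
            self._slots.release()

limiter = ConcurrencyLimiter(max_in_flight=2)
print(limiter.try_call(lambda: "ok"))   # ok

# Simulate both slots being held by other in-flight requests:
limiter._slots.acquire(blocking=False)
limiter._slots.acquire(blocking=False)
print(limiter.try_call(lambda: "ok"))   # None -- request shed immediately
```

Failing fast keeps latency bounded for the requests that are accepted, which is the behavior the 95% incident reduction relies on.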


Beyond GDPR: ePrivacy could have an even greater impact on mobile


Metadata can be used in privacy-protective ways to develop innovative services that deliver new societal benefits, such as public transport improvements and traffic congestion management. In many cases, pseudonymisation can be applied to metadata to protect the privacy rights of individuals, while also delivering societal benefits. Pseudonymisation of data means replacing any identifying characteristics of data with a pseudonym, or, in other words, a value which does not allow the data subject to be directly identified. The processing of pseudonymised metadata can enable a wide range of smart city applications. For example, during a snow storm, city governments can work with mobile networks to notify connected car owners to remove their cars from a snowplough path. Using pseudonyms, the mobile network can notify owners to move their cars from a street identified by the city, without the city ever knowing the car owners’ identities.
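Pseudonymisation can be sketched with a keyed hash: the same subscriber always maps to the same pseudonym, so events can be correlated, but without the key the mapping cannot be reversed to an identity. (The phone numbers and key below are hypothetical.)

```python
import hashlib
import hmac

SECRET_KEY = b"network-held-key"  # assumption: held by the mobile network only

def pseudonymise(identifier: str) -> str:
    """Deterministic pseudonym: the same subscriber always yields the same
    value, enabling correlation without revealing who the subscriber is."""
    return hmac.new(SECRET_KEY, identifier.encode(), hashlib.sha256).hexdigest()[:16]

# The city asks the network to notify owners of cars on a snowplough route;
# it only ever sees pseudonyms, never the subscribers' numbers.
cars_on_route = ["+65-5550-0001", "+65-5550-0002"]
notices = {pseudonymise(number): "move your car" for number in cars_on_route}
print(len(notices))  # 2
```

The network alone can invert the mapping (by recomputing pseudonyms over its subscriber list), which is what lets it deliver the notification without disclosing identities to the city.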


Should we add bugs to software to put off attackers?

The effectiveness of the scheme also hinges on making the bugs non-exploitable but realistic (indistinguishable from “real” ones). For the moment, the researchers have chosen to concentrate their research on the first requirement. The researchers have developed two strategies for ensuring non-exploitability and used them to automatically add thousands of non-exploitable stack- and heap-based overflow bugs to real-world software such as nginx, libFLAC and file. “We show that the functionality of the software is not harmed and demonstrate that our bugs look exploitable to current triage tools,” they noted. Checking whether a bug can be exploited and actually writing a working exploit for it is a time-consuming process and currently can’t be automated effectively. Making attackers waste time on non-exploitable bugs should frustrate them and, hopefully, in time, serve as a deterrent. The researchers are the first to point out the limitations of this approach: the aforementioned need for the software to be “ok” with crashing, and the fact that they still have to find a way to make these bugs indistinguishable from those occurring “naturally”.



Quote for the day:


"Coaching is unlocking a person's potential to maximize their own performance. It is helping them to learn rather than teaching them." -- John Whitmore


Daily Tech Digest - August 08, 2018

Knowing how the figures work will end up meaning no more than that you will see the writing on the wall more quickly than the average business owner might. But that will mean little if you can’t solve the problem. So how does the huge technological transformation that we are now going through affect the task of running a practice successfully? The obvious answer is, in many and various ways. The app and smartphone combination has completely transformed both the way a firm can get information from clients and the speed with which it can gather that information. Throw in instant messaging and a user base that is increasingly filling up with people who can use both thumbs to tap out replies on their favourite phone – and who do this day in and day out, regardless – and, yes, we are definitely in a different world by comparison with, say, a decade ago. As a firm, if you’re not already taking advantage of this change, well, one worries for you. As Gavin Fell, VP EMEA at Receipt Bank observes, there’s no longer any excuse for clients turning up once a month with a shopping bag full of receipts.


Artificial Intelligence in Singapore: pervasive, powerful and present

People and businesses are unanimous in their opinion that AI will impact our daily lives, and that there are productivity gains to be enjoyed through the adoption of this technology. Overall, we can expect to see a spike in the frequency, flexibility and immediacy of data analysis across industries and applications to drive business decisions. One example is the financial services industry in Singapore, which has been at the forefront of developing and adopting AI technologies across functions in their businesses. AI-based automated chat systems that can interact with customers on personal finance queries in real time are now common in several local banking platforms in Singapore. DBS Bank's AI-driven Virtual Assistant handles over 80 per cent of requests on Facebook Messenger accurately without human intervention ... Such services will ultimately improve service delivery, remove the stress and complexity of manual number crunching, and offer insights at greater speed and accuracy to facilitate quicker decision making in an industry where time is money.


Inside the updated Windows Console

While commands still have many of the same names, and many DOS apps will still run in the Windows console, it's a long way removed from that old text-mode DOS prompt, building on the evolution of the Windows platform. Over the years it's been joined in the Windows console by PowerShell, the default system administration scripting language for Windows and Windows Server, with tools for remote management of both Office 365 and Azure. PowerShell's blue console and color-coded command strings are a long way removed from the old black-and-white DOS window. Its action-oriented command vocabulary is also very different, letting you get and set system settings, building actions into complex scripts that can manage whole fleets of servers. If the Windows command line is a tool for working with a single PC, then PowerShell is a sysadmin's Swiss Army knife for an entire organization full of PCs and servers. Windows 10 recently brought along a third command-line environment — Linux — thanks to the Windows Subsystem for Linux. 


The Galaxy Tab S4 is a great productivity machine precisely because it’s an Android tablet

The desktop experience really does feel a lot like Windows. You can resize the windows of DeX-optimized Android apps. You can launch multiple app windows, and Alt-Tab among them. You can drag and drop content between two compatible apps. You can save shortcuts to the desktop. You can right-click to launch contextual menus. You can navigate a taskbar that lets you see previews of open apps on the left, and system tools like Bluetooth, Volume, and Search on the right. ... There are DeX versions of Microsoft Word, Excel, Outlook, PowerPoint, OneNote, OneDrive and Skype. There’s also DeX support for Adobe Acrobat Reader, Photoshop Lightroom, and Photoshop Express (making my job possible on the road). Nine Mail, my preferred app for secure email, has a DeX version too. Of course, so much work today gets done in web browsers and is executed in the cloud (think about all of Google’s apps, let alone Office 365), so you could argue no one even needs apps for 90 percent of the work we do. Still, from a purely psychological, I’m-happy-in-my-comfort-zone perspective, I embrace what DeX delivers.


What to do when IPv4 and IPv6 policies disagree


An obvious takeaway for network and security administrators is that security policies should be more homogeneously applied to both IPv4 and IPv6, and that the enforcement of security policies on both internet protocols should become part of normal operation and management procedures. It is also advisable for sites that don't currently support IPv6 to apply IPv6 packet filtering policies that are similar to those applied to their IPv4 counterparts. This way, when IPv6 is finally deployed on those sites, the servers and other network elements will not be caught off guard. Recent studies have indicated that mismatches between IPv4 and IPv6 security policies are rather common. Network and security administrators must take action to ensure that the policies applied to both protocols are homogeneous. These common mismatches mean that, when port scanning a site as part of a penetration test, for example, all of the available addresses must be subject to port scans, as the results for different addresses and different internet protocols may differ.
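Enumerating every address a host answers on, IPv4 and IPv6 alike, is the first step of such a test; the standard library makes this straightforward (a sketch; scan only hosts you are authorised to test):

```python
import socket

def all_addresses(hostname):
    """Resolve both A (IPv4) and AAAA (IPv6) records for a host, since
    packet-filtering policies may differ between the two protocols."""
    infos = socket.getaddrinfo(hostname, None, proto=socket.IPPROTO_TCP)
    # Deduplicate: getaddrinfo can return one entry per socket type.
    return sorted({(family.name, sockaddr[0]) for family, *_, sockaddr in infos})

# Each returned address should be port scanned separately -- a port that is
# filtered over IPv4 may be wide open over IPv6, and vice versa.
for family, address in all_addresses("localhost"):
    print(family, address)
```

Scanning only the IPv4 address, as older tooling often does by default, is exactly how mismatched IPv6 policies go unnoticed.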


Legal and compliance teams critical to machine learning success

Companies run into problems with ML in a few ways. First, and most dangerous, is the failure to involve legal and compliance teams in the formulation of ML projects. With the rapid evolution of privacy regulations, it’s essential for enterprises to ensure they remain compliant. Another common issue is when companies focus on the technology first. Companies often invest millions of dollars and perhaps years developing a machine learning platform, convinced the organization will derive numerous benefits from different departments flocking to take advantage of it. Unsurprisingly, they don’t get the adoption they expect because they didn’t present a successful use case to their internal customers. A third critical mistake organizations make is not understanding the human part of the equation, that is, failing to adequately train the machine learning engine. It’s essential to use an iterative approach to ensure the ML engine is accurate in its analysis or identification. Failure to do this will undoubtedly lead to a high error rate.


Seagate announces new flash drives for hyperscale markets

The surprising aspect of the Nytro drives is that they use the SATA interface. SATA is an old interface, a legacy of hard drives, and nowhere near capable of fully exploiting an SSD's performance. For true parallel throughput you need a PCI Express interface, often in the M.2 form factor, which is designed around how flash memory actually works. “People keep expecting SATA to go away, but SATA is lingering. It’s a very easy way of using your bits. It’s simple, it replaces hard disk drives and still give 30 times faster performance with the same security and same management [as PCI Express drives] and gives our portfolio a no-brainer for our customers,” said Tony Afshary, director of product management for SSD storage products at Seagate. But there are also PCI Express drives, and they bring new features to the table as well. The new Nytro 5000 for hyperscale data centers doubles the read and write performance of the previous model while adding NVMe features such as SR-IOV for virtualization, additional namespaces, and multi-stream support.


How AI and Intelligent Automation Impact HR Practices

Right now, HR employees are buried in transactional work that involves data entry and simple math calculations. Those types of tasks can be done faster, cheaper, and more accurately using Robotic Process Automation (RPA). EY started with a brainstorming session that mapped out current processes and identified opportunities for change. "We probably came up with a half a dozen areas that we felt were not good use of human time, but a very good use of robots [such as] onboarding people, reconciliations for benefits, table batching and validation, travel and expenses, [and] learning and administration," said Fiore. For example, each of EY's 13,000 tax practice employees must attend training that results in certification. The certification needs to be validated, which involves notifying employees and managers and making sure the certification is recorded properly. "There's this whole process where people are pushing emails and spreadsheets for all the training that we do," said Fiore. "It's a team of people that are doing that kind of work we can free up."


Raising the Bar for Ethical Cryptocurrency Mining


Cybercriminals have for years used third-party scripts to co-opt people into malicious activities without their knowledge. A notable example is Texthelp: cybercriminals injected a Coinhive script into one of Texthelp's plugins, causing several U.K. government websites to take part in malicious cryptomining unknowingly. For quite some time, we have been discussing malicious cryptomining. By now, you may be hoping for some information about what an appropriate cryptomining process looks like and whether it is really feasible to practice it decently in a predominantly malicious environment. This is what we would refer to as ethical cryptomining. People engaged in it use their own systems to solve complex mathematical problems that validate or process cryptocurrency transactions. Interestingly, as cryptocurrency continues to become more popular and its value rises sharply, the complexity of those math problems rises too, demanding more CPU/GPU power and prompting miners to opt for higher-end graphics cards.
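The "complex mathematical problem" behind most mining is a proof-of-work search: find a nonce whose hash of the block data meets a difficulty target. A minimal sketch using SHA-256 (the function name and hex-prefix difficulty scheme here are simplifications for illustration, not any specific coin's protocol):

```python
import hashlib

def mine(block_data: bytes, difficulty: int) -> int:
    """Brute-force a nonce so that SHA-256(block_data + nonce) starts
    with `difficulty` zero hex digits."""
    target = "0" * difficulty
    nonce = 0
    while True:
        digest = hashlib.sha256(block_data + str(nonce).encode()).hexdigest()
        if digest.startswith(target):
            return nonce
        nonce += 1
```

This also shows why hardware demands climb with difficulty: each additional leading zero hex digit multiplies the expected number of hash attempts by 16, so rising difficulty translates directly into more CPU/GPU work per validated block.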


Getting the most from OneNote, part 2: OneNote 10 is catching up

To make your notes easier to organise, you can tag key paragraphs and then search for all notes with a specific tag. OneNote 2016 has a drop-down list of more than 20 tags on the Home tab. You can apply the first nine tags quickly by typing Ctrl-1 through 9, and you can choose Customize Tags to reorder the existing tags — and create your own custom tags, choosing the tag icon and text formatting they apply. You can move those further up the list to give them keyboard shortcuts, and tags you apply will sync to other devices. However, you'll have to right-click on them and add them as custom tags on each new machine. You can also tag something in OneNote as an Outlook task, complete with an Outlook reminder, and monitor it from both applications. OneNote Online has the same long list of tags as OneNote 2016, but they're not customizable (although custom tags you've added to your notes will show up). Currently, OneNote 10 only has the first nine tags from OneNote 2016 — To Do (which doesn't sync to Outlook), Important, Question, Remember for later, Definition, Highlight, Contact, Address and Phone Number.



Quote for the day:

"Be The Kind Of Leader You Would Want To Follow." -- Gordon Tredgold