Daily Tech Digest - December 11, 2019

Segment Routing (SR) uses a routing technique known as source packet routing: the source or ingress router specifies the path a packet will take through the network, rather than the packet being routed hop by hop based on its destination address. Source packet routing is not a new concept; it has existed for over 20 years. MPLS, for example, is one of the most widely adopted forms of source packet routing, and it uses labels to direct packets through a network. In an MPLS network, when a packet arrives at an ingress node, an MPLS label is prepended to the packet that determines the packet’s path through the network. While SR and MPLS are similar, in that both are source-based routing approaches, there are a few key differences between them. One lies in a primary objective of SR, documented in RFC 7855: “The SPRING [SR] architecture MUST allow putting the policy state in the packet header and not in the intermediate nodes along the path.”
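The idea of carrying the path in the packet header, rather than in per-hop state, can be sketched with a toy label stack (a hypothetical, simplified model; real MPLS labels are 20-bit values, not router names):

```python
# Toy sketch of source packet routing: the ingress node writes the whole
# path into the packet as a label stack; transit nodes just pop the top
# label and forward, holding no per-path state (the SR goal cited above).

def ingress(payload, path):
    # Ingress encodes the full path into the packet header.
    return {"labels": list(path), "payload": payload}

def forward(packet):
    # Each hop pops the top label to learn the next hop; no lookup tables
    # of per-path policy are needed at intermediate nodes.
    next_hop = packet["labels"].pop(0)
    return next_hop, packet

packet = ingress("data", ["R2", "R5", "R9"])
hops = []
while packet["labels"]:
    hop, packet = forward(packet)
    hops.append(hop)
# hops is now ["R2", "R5", "R9"]; no router along the way stored any state
```

The point of the sketch is the contrast with hop-by-hop destination routing: the policy lives entirely in the packet, so intermediate nodes stay stateless.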


Never Mind Consumers, This Was a Year of Steady Infrastructural Progress

Much of the traction in 2019 that did not come from exchanges or trading has been generated in the infrastructure layers. Node infrastructure provider Blockdaemon, having recognized the market’s propensity to proliferate new decentralized networks, is generating revenue across an impressive 22 such networks today and continues to grow month over month. The Graph is serving over 400 public smart contract subgraphs, with request volume clocking millions of daily data queries. Meanwhile, 3Box’s self-sovereign identity and data solution is rapidly integrating across the Ethereum ecosystem: within wallets like MetaMask, within many of the new user onboarding solutions like Portis and Authereum, and even within the governance experiment MolochDAO. Blockchain’s road to mainstream adoption depends on institutional backing of businesses that support blockchain infrastructure and enable traditional investors both to capitalize on and participate in digital asset networks. As such, the compliance levels of exchanges have been increasing to support institutional clients.


5G and Me: And the Golden Hour


Connected-ambulance 5G network slicing concepts were demonstrated at the Mobile World Congress (MWC) in Barcelona, Spain, in February 2019 by the Dell EMC Cork Centre of Excellence (CoE). Network slicing is a type of virtual networking architecture, similar to software-defined networking (SDN) and network functions virtualization (NFV), whose goal is software-based network automation. This technology allows the creation of multiple virtual networks on a shared physical infrastructure. ... The goal for the future of connected care in emergencies would be to identify the conditions for stroke, congestive heart failure (CHF) and myocardial infarction (MI); measure and score at the site; predictively collect Electronic Medical Record (EMR) metadata in conjunction with specific image studies via DICOM (Digital Imaging and Communications in Medicine); and combine this with the metadata from disease-specific epidemiological studies for that geographic region, all within the “golden hour”. This combinatorial analysis at the point of care is the future and can prevent disability and death at scale, especially since not all ambulance visits are emergencies.


Google proposes hybrid approach to AI transfer learning for medical imaging


In transfer learning, a machine learning algorithm is trained in two stages. First comes pretraining, where the algorithm is trained on a benchmark data set representing a diversity of categories. Next comes fine-tuning, where it is further trained on the specific target task of interest. The pretraining step helps the model learn general features that can be reused on the target task, boosting its accuracy. According to the team, however, transfer learning isn’t quite the end-all, be-all of AI training techniques. In a performance evaluation comparing a range of model architectures trained to diagnose diabetic retinopathy and five different diseases from chest X-rays, a portion of which were pretrained on an open source image data set, they report that transfer learning didn’t “significantly” affect performance on medical imaging tasks. Moreover, a family of simple, lightweight models performed at a level comparable to the standard architectures. In a second test, the team studied the degree to which transfer learning affected the kinds of features and representations learned by the AI models. They analyzed and compared the hidden representations in the different models trained to solve medical imaging tasks, computing similarity scores for some of the representations between models trained from scratch and those pretrained on ImageNet.
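One common way to compute such representation-similarity scores is linear CKA (centered kernel alignment). Below is a pure-Python sketch, assuming small dense matrices; the paper's actual metric and implementation may differ:

```python
# Minimal pure-Python sketch of linear CKA (centered kernel alignment),
# a common score for how similar two layers' learned representations are.
# x and y are n x d matrices (lists of lists): n examples, d features.

def _center(m):
    # Subtract each column's mean so every feature is zero-centered.
    n = len(m)
    means = [sum(row[j] for row in m) / n for j in range(len(m[0]))]
    return [[row[j] - means[j] for j in range(len(row))] for row in m]

def _gram(m):
    # Linear Gram matrix: G[i][j] = <m_i, m_j> over example pairs.
    return [[sum(a * b for a, b in zip(r1, r2)) for r2 in m] for r1 in m]

def _frob_inner(a, b):
    # Frobenius inner product of two matrices.
    return sum(x * y for ra, rb in zip(a, b) for x, y in zip(ra, rb))

def linear_cka(x, y):
    gx, gy = _gram(_center(x)), _gram(_center(y))
    hsic = _frob_inner(gx, gy)
    return hsic / ((_frob_inner(gx, gx) ** 0.5) * (_frob_inner(gy, gy) ** 0.5))
```

A useful property for comparing networks: the score is 1.0 for identical representations and is invariant to uniform rescaling of the features.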


Robotic exoskeletons: Coming to a factory, warehouse or army near you, soon 

Ford is thought to be one of the bigger users of exoskeletons to date, but other car makers are deploying exoskeletons too, although several have opted for build-your-own rather than off-the-shelf systems. Hyundai debuted its own exoskeleton vest, the VEX, earlier this year. The back-worn exoskeleton "is targeted at production-line workers whose job is primarily overhead, such as those bolting the underside of vehicles, fitting brake tubes, and attaching exhausts", Hyundai said, and is expected to be rolled out at Hyundai plants. GM, meanwhile, has teamed up with NASA to create a robotic glove that can increase the amount of force a wearer can exert when gripping an object or lifting a piece of equipment for long periods, cutting the likelihood of strain or injury. Closer to home, the construction industry is also shaping up to be another significant user of exoskeletons. Builder Willmott Dixon, for example, started piloting the ExoVest at a Cardiff site last year. One factor driving the rollout of exoskeletons in both the construction and auto industries is the possibility of cutting worker injuries as well as enabling skilled staff to work for longer.


What does it mean to think like a data scientist?

Art is a very important part of that, because what we find in a lot of our data science engagements is there's a lot of exploration of what might be possible, the realm of what's possible. So, we tried to empower the power of ‘might,’ right? That might be a good idea, that might be something, because if you don't have enough might ideas, you never have anything, any breakthrough ideas. And so, this art of thinking like a data scientist, this kind of says, 'Yeah, there's a data science process.' But think about it as guardrails, not railroad tracks. And we're going to bounce in between these things. And oh, by the way, it's really important that your business stakeholders, your subject matter experts, also understand how to think like a data scientist in this kind of non-linear creative kind of fashion, so you come up with better ideas. Because we're all in search of variables and metrics that might be better predictors of performance, right? And the data science team will have some ideas from their past experience. 



Teams are struggling to implement these new tools, and 71 percent said that they are adding security technologies faster than they are adding the capacity to proactively use them. This added complexity is also compromising their threat response, with 69 percent of security decision makers surveyed saying that their security team currently spends more time managing security tools than effectively defending against threats. To make matters worse, a majority of enterprises are less secure today as a result of security tool sprawl, and over half (53%) say their security team has reached a tipping point where the excessive number of security tools in place adversely impacts their organization's security posture. ReliaQuest's CEO, Brian Murphy, provided further insight on the report's findings, saying: "Cyber threats continue to rise and require companies to mitigate risk. While it's tempting to think another piece of technology will solve the problem, it's far from true -- in fact, this survey proves more tools can worsen enterprise security by adding complexity without improving outcomes."


There’s No Opting Out of the California Consumer Privacy Act

For starters, GDPR applies to all European data but is a minimum requirement; individual countries in the EU have their own laws that are often more restrictive. By contrast, CCPA applies to California data only and excludes any data that is already covered by a federal law, such as HIPAA or GLBA. While GDPR protects personal information (PI) that could potentially identify a specific individual -- including name, address, telephone number and Social Security number (SSN) -- CCPA goes beyond that to include product purchase history, social media activity, IP addresses, and household information. Under CCPA, companies are required to include a single, clear and conspicuous "Do Not Sell My Personal Information" link on their homepages. GDPR, by contrast, offers various opt-out rights, each of which requires individual action. Under GDPR, administrative fines can reach 20 million euros or 4% of annual global revenue, whichever is greater. Under CCPA, the California Attorney General can fine companies $2,500 per violation or up to $7,500 for each intentional violation.


Google Chrome can now warn you in real time if you're getting phished


Between July and September, Google sent more than 12,000 warnings about state-sponsored phishing attacks targeting its users in the US. According to Verizon's annual cybersecurity report, phishing is the leading cause of data breaches, and Google said in August that it blocked about 100 million phishing emails every day. But phishing links don't just come in emails: They can also appear in malicious advertisements, or through direct messages on chat apps. For those of you using a Chrome browser, Google is launching an extra level of protection against phishing through real-time checks on site visits. You can turn it on by enabling "Make searches and browsing better" in your Chrome settings. This protection was already available for Chrome's Safe Browsing mode, which checked the URL of every website visited and made sure it was not on Google's block list. The block list is saved on devices and only synced every 30 minutes, allowing savvy hackers to bypass the filter by creating a new phishing URL before the list updates.
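The gap between a periodically synced local list and a real-time check can be sketched with a toy model (all names and the 30-minute interval here are illustrative, taken from the description above, not Chrome's actual implementation):

```python
# Toy sketch contrasting a periodically synced local block list with a
# real-time lookup against the authoritative server-side list.

SYNC_INTERVAL = 30 * 60  # local copy refreshes every 30 minutes, in seconds

class LocalBlockList:
    def __init__(self, server, now):
        self.server = server
        self.copy = set(server)           # snapshot taken at sync time
        self.last_sync = now

    def is_blocked(self, url, now):
        if now - self.last_sync >= SYNC_INTERVAL:
            self.copy = set(self.server)  # periodic refresh
            self.last_sync = now
        return url in self.copy           # stale between syncs

def realtime_is_blocked(server, url):
    return url in server                  # always consults the live list

server = {"evil.example"}
local = LocalBlockList(server, now=0)
server.add("fresh-phish.example")         # attacker mints a brand-new URL

print(realtime_is_blocked(server, "fresh-phish.example"))  # True
print(local.is_blocked("fresh-phish.example", now=60))     # False until next sync
```

The freshly created URL slips past the local copy for up to a sync interval, which is exactly the window the real-time check closes.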


Big Changes Are Coming to Security Analytics & Operations

Nearly two-thirds (63%) of survey respondents claim that security analytics and operations are more difficult today than they were two years ago. This increasing difficulty is being driven by external changes and internal challenges. From an external perspective, 41% of security pros say that security analytics and operations are more difficult now due to rapid evolution in the threat landscape, and 30% claim that things are more difficult because of the growing attack surface. Security teams have no choice but to keep up with these dynamic external trends. On the internal side, 35% of respondents report that security analytics and operations are more difficult today because they collect more security data than they did two years ago, 34% say that the volume of security alerts has increased over the past two years, and 29% complain that it is difficult to keep up with the volume and complexity of security operations tasks. Security analytics/operations progress depends upon addressing all these external and internal issues.



Quote for the day:


"Growth is painful. Change is painful. But nothing is as painful as staying stuck somewhere you don't belong." -- Mandy Hale


Daily Tech Digest - December 10, 2019

Internet of the Senses is on the horizon, thanks to AR and VR


While smell cannot currently be conveyed digitally, the report found that will change, with smell becoming an online experience by 2030. More than half (56%) of respondents said technology would evolve to the point that they would be able to smell scents in films. The same capability will be applied to sales as retailers market products commercially with smell, the report found, meaning perfume commercials could emit a scent. Along the same lines, humans will also be able to experience taste through devices, according to the report. Nearly 45% of respondents believe that in the next 10 years, a device could exist that digitally enhances the food someone eats. This advancement could have significant impacts on health and diet, allowing people to eat healthier foods that taste more savory than they otherwise would. The application presents another opportunity for retail marketing, as consumers could taste food products. People viewing cooking programs could even taste the food on screen, the report found. More than half (63%) of respondents said smartphone users would be able to feel the shape and texture of digital icons.



4 Authentication Use Cases: Which Protocol To Use?

Where strong security is a requirement, SAML is generally a good choice. All aspects of the exchange between the RP and IdP can be digitally signed and verified by both parties. This provides high assurance that each party is communicating with the correct counterpart and not an imposter. In addition, the assertion from the IdP may be encrypted, so that HTTPS is not the only protection against attackers accessing users’ data. To add further security, signing and encryption keys may be rotated regularly. To take OIDC to the same level of security requires extra cryptographic keys, as in Open Banking extensions, and this can be relatively onerous to set up and maintain. However, OIDC benefits from the use of JSON and the simpler use by mobile apps, compared to SAML. ... Here, the preference will be for OIDC, as it is likely that a variety of devices, some not browser-based, might be involved, which normally rules out SAML. The built-in consent associated with OIDC enhances the privacy aspects of the data sharing. In addition, the use of signing and encryption may be used to strengthen the security aspects to a degree that adequately meets the requirements of handling such data.
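OIDC's reliance on JSON shows up in its ID tokens, which are JWTs whose claims segment is base64url-encoded JSON. A minimal stdlib sketch of reading those claims (illustration only; `decode_jwt_claims` is a hypothetical helper, and a real relying party must verify the signature, issuer, audience and expiry before trusting anything):

```python
import base64
import json

def decode_jwt_claims(token):
    """Decode the claims segment of a JWT ("header.payload.signature").

    Illustration only: this does NOT verify the signature. A real OIDC
    relying party must validate the signature, issuer, audience, and
    expiry before trusting any claim in the token.
    """
    payload_b64 = token.split(".")[1]
    # JWT segments use base64url without padding; restore it before decoding.
    payload_b64 += "=" * (-len(payload_b64) % 4)
    return json.loads(base64.urlsafe_b64decode(payload_b64))
```

This simplicity (plain JSON, easily consumed by mobile apps) is the practical advantage OIDC holds over SAML's XML-based assertions.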



Predictions for AI and ML in 2020

The digital skills gap present within workforces has meant that employees are unsure about how to unleash AI’s full potential. But according to SnapLogic CTO, Craig Stewart, this problem could take a step towards being solved next year. “Transparency remains a hot topic and will continue into 2020 as companies aim to ensure transparency, visibility, and trust of AI and AI-assisted decisions,” said Stewart. “We’ll see further development and expansion of the ‘explainable AI movement,’ and efforts like it. ... Even though there are aforementioned worries regarding AI and ML possibly replacing human workers, some experts in digital innovation believe that the gradual inclusion of the technology will end up being a much more collaborative process. “Despite fears that it will replace human employees, in 2020 AI and machine learning will increasingly be used to aid and augment them,” said Felix Gerdes, Insight UK‘s director of digital innovation services. “For instance, customer service workers need to be certain they are giving customers the right advice.


The Future of Spring Cloud's Hystrix Project


The Spring Cloud Hystrix project was built as a wrapper on top of the Netflix Hystrix library, and since then it has been adopted by many enterprises and developers to implement the Circuit Breaker pattern. In November 2018, Netflix announced that it was putting the project into maintenance mode, prompting Spring Cloud to announce the same; since then, no further enhancements have been made to the Netflix library. At SpringOne 2019, Spring announced that the Hystrix Dashboard will be removed from Spring Cloud 3.1, which makes it officially dead. As the Circuit Breaker pattern has been advertised so heavily, many developers have either used it or want to use it, and now need a replacement. Resilience4j has been introduced to fill this gap and provide a migration path for Hystrix users. Resilience4j was inspired by Netflix Hystrix but is designed for Java 8 and functional programming. It is lightweight compared to Hystrix, as it has the Vavr library as its only dependency. Netflix Hystrix, by contrast, depends on Archaius, which has several other external library dependencies such as Guava and Apache Commons.
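The Circuit Breaker pattern itself is small enough to sketch. This toy Python state machine (not the Hystrix or Resilience4j API) shows the three states: closed, open after repeated failures, and half-open after a cooldown:

```python
import time

class CircuitBreaker:
    """Toy circuit breaker: closed -> open after `threshold` consecutive
    failures; open -> half-open after `cooldown` seconds; one success in
    half-open closes the circuit again."""

    def __init__(self, threshold=3, cooldown=30.0, clock=time.monotonic):
        self.threshold = threshold
        self.cooldown = cooldown
        self.clock = clock
        self.failures = 0
        self.opened_at = None  # None means the circuit is closed

    def call(self, fn, *args, **kwargs):
        if self.opened_at is not None:
            if self.clock() - self.opened_at < self.cooldown:
                # Open: fail fast instead of hammering the broken dependency.
                raise RuntimeError("circuit open: failing fast")
            # Cooldown elapsed: half-open, allow one trial call through.
        try:
            result = fn(*args, **kwargs)
        except Exception:
            self.failures += 1
            if self.failures >= self.threshold:
                self.opened_at = self.clock()  # trip the breaker
            raise
        self.failures = 0       # success: reset and close the circuit
        self.opened_at = None
        return result
```

Both Hystrix and Resilience4j layer richer behavior (metrics, sliding failure-rate windows, fallbacks) on top of essentially this state machine.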


Dubai’s Kentech kicks off digital transformation drive


“Kentech has suffered with poor IT adoption partnerships in the past, so we needed something that was world-class. We wanted something that our business would love and use.” Kentech launched its tendering process early this year and by July it had selected Oracle as its cloud partner. “During the tendering process, we found that our business was closely aligned to construction,” said O’Gara. “Some of our requirements were quite complex, especially when dealing with reimbursable and fixed-price work – they can chop and change on a daily basis. We found that Oracle could meet those complex requirements. “For us, it was the ERP and budgeting models that were the differentiator. We’ve now started implementation and we’re going to go live at the end of this year with the first phase. We’re a project-based business, so we need to be able to scale up and down very quickly. The cloud model suits us perfectly as a business because we can be flexible, rather than going all out and saying ‘I need 10 more servers’.”


Hybrid multi-cloud a must for banks

Banks operating under a hybrid multi-cloud model can manage finances predictably and optimally as cost models shift from fixed to variable. Storing data on site in traditional facilities is expensive and locks banks into long-term contracts for a set amount of data storage. Banks over-provision infrastructure and storage, paying for unnecessary resources. Hybrid cloud models allow banks to scale as needed, purchasing only what is immediately utilised under the subscription-based model offered by most CSPs. Procurement and implementation done the traditional way is slow, so capacity management and a degree of guesswork are used, resulting in over-capitalised systems offering little ROI. As the cloud allows for scaling on a pay-as-you-go model, spend is greatly optimised. For example, UBS’s risk management platform is powered by Microsoft Azure, saving the financial services company 40 percent on infrastructure costs, improving calculation times by 100 percent, and gaining near-infinite scale.


The 10 Best Examples Of How Companies Use Artificial Intelligence In Practice

Today, Waymo wants to bring self-driving technology to the world not only to move people around, but also to reduce the number of crashes. Its autonomous vehicles are currently shuttling riders around California in self-driving taxis. Right now, the company can’t charge a fare, and a human driver still sits behind the wheel during the pilot program. Google signaled its commitment to deep learning when it acquired DeepMind. Not only did the system learn how to play 49 different Atari games, the AlphaGo program was the first to beat a professional player at the game of Go. Another AI innovation from Google is Google Duplex: using natural language processing, an AI voice interface can make phone calls and schedule appointments on your behalf. ... Another innovative way Amazon uses artificial intelligence is to ship things to you before you even think about buying them. The company collects a great deal of data about each person’s buying habits, and has such confidence in that data that it now uses predictive analytics to recommend items to customers and anticipate what they need before they need it.


Verizon kills email accounts of archivists trying to save Yahoo Groups history

According to the Archive Team: "As of 2019-10-16 the directory lists 5,619,351 groups. 2,752,112 of them have been discovered. 1,483,853 (54%) have public message archives with an estimated number of 2.1 billion messages (1,389 messages per group on average so far). 1.8 billion messages (86%) have been archived as of 2018-10-28." Verizon has issued a statement to the group supporting the Archive Team, telling concerned archivists that "the resources needed to maintain historical content from Yahoo Groups pages is cost-prohibitive, as they're largely unused". The telecoms giant also said the people booted from the service had violated its terms of service and suggested the number of users affected was small. "Regarding the 128 people who joined Yahoo Groups with the goal to archive them – are those people from Archiveteam.org? If so, their actions violated our Terms of Service. Because of this violation, we are unable to reauthorize them," Verizon said.



Open source refers to an online project that is publicly accessible for anyone to modify and share, as long as they provide attribution to the original developer, reported TechRepublic contributor Jack Wallen in What is open source?. Since its release over 20 years ago, open source has changed the internet. Without open source, the online experience would be "a far different place; much more limited, expensive, less robust, less feature-driven and less scalable. Big name companies would be much less powerful and successful as well in the absence of open source software," wrote Scott Matteson in How to decide if open source or proprietary software solutions are best for your business. ... Major tech companies have set their sights on open source development, with Microsoft's acquisition of GitHub and IBM's acquisition of Red Hat. However, developers are concerned about the impact these tech giants could have on the open source community, the report found.  Nearly 41% of respondents said they were concerned about the level of involvement from major tech players in open source. The main concerns they cited involved possible self-serving intentions from big companies, the use of restrictive licenses that give large organizations unfair competitive advantage, and overall trust of large corporations, the report found.


Is cloud migration iterative or waterfall?

Cloud migration projects have two dimensions. First, they are short-term sprints in which a project team migrates a handful of application workloads and data stores to a single cloud or multicloud. These teams act independently, with little architectural oversight or governance, and the sprints last between two and six months. Second is the longer-term architecture, including security, governance, management, and monitoring. This may be directed by a cloud business office, the office of the CTO, or a master cloud architect, and this set of processes goes on continuously. Here is the problem: the former tends to overshadow the latter, meaning that we’re moving to the cloud using ad hoc and decoupled sprints, with little regard for common security and governance layers or any sort of management and monitoring. The result is something we’ve talked about here before: complexity. Although we built something that seems to work, applications migrated from one platform to another are deployed with different technology stacks.



Quote for the day:


"Without growth, organizations struggle to add talented people. Without talented people, organizations struggle to grow." -- Ray Attiyah


Daily Tech Digest - December 09, 2019

The PC was supposed to die a decade ago. Instead, this happened


Not all that long ago, tech pundits were convinced that by 2020 the personal computer as we know it would be extinct. You can even mark the date and time of the PC's death: January 27, 2010, at 10:00 A.M. Pacific Time, when Steve Jobs stepped onto a San Francisco stage to unveil the iPad. The precise moment was documented by noted Big Thinker Nicholas Carr in The New Republic with this memorable headline: "The PC Officially Died Today." ... And so, here we are, a full decade after the PC's untimely death, and the industry is still selling more than a quarter-billion-with-a-B personal computers every year. Which is pretty good for an industry that has been living on borrowed time for ten years. Maybe the reason the PC industry hasn't suffered a mass extinction event yet is because they adapted, and because those competing platforms weren't able to take over every PC-centric task. So what's different as we approach 2020? To get a proper before-and-after picture, I climbed into the Wayback Machine and traveled back to 2010.


Netflix open sources data science management tool

Netflix has open sourced Metaflow, an internally developed tool for building and managing Python-based data science projects. Metaflow addresses the entire data science workflow, from prototype to model deployment, and provides built-in integrations to AWS cloud services.  Machine learning and data science projects need mechanisms to track the development of the code, data, and models. Doing all of that manually is error-prone, and tools for source code management, like Git, aren’t well-suited to all of these tasks. Metaflow provides Python APIs to the entire stack of technologies in a data science workflow, from access to the data through compute resources, versioning, model training, scheduling, and model deployment. ... Metaflow does not favor any particular machine learning framework or data science library. Metaflow projects are just Python code, with each step of a project’s data flow represented by common Python programming idioms. Each time a Metaflow project runs, the data it generates is given a unique ID. This lets you access every run—and every step of that run—by referring to its ID or user-assigned metadata.
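The run-scoped versioning Metaflow provides can be illustrated with a toy, stdlib-only sketch (this is not Metaflow's actual API; `RunStore` and its methods are hypothetical): every run gets a unique ID, and each step's artifacts remain retrievable later by that ID.

```python
import uuid

class RunStore:
    """Toy sketch of run-scoped artifact versioning, the idea behind
    Metaflow's run tracking: every run gets a unique ID, and every
    artifact a step produces is retrievable by (run_id, step, name)."""

    def __init__(self):
        self.runs = {}

    def start_run(self):
        # Each run gets a fresh, unique identifier.
        run_id = uuid.uuid4().hex
        self.runs[run_id] = {}
        return run_id

    def save(self, run_id, step, name, value):
        # Record an artifact produced by a named step of this run.
        self.runs[run_id].setdefault(step, {})[name] = value

    def load(self, run_id, step, name):
        # Later, any run's artifacts can be inspected by ID.
        return self.runs[run_id][step][name]
```

Because artifacts are keyed by run, two runs of the same project never overwrite each other, which is what makes manual bookkeeping of code, data and model versions unnecessary.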



AppSec in the Age of DevSecOps

Application security as a practice is dynamic. No two applications are the same, even if they belong to the same market domain and presumably operate on identical business use cases. Some of the many factors that cause this variance include the technology stack of choice, the programming style of developers, the culture of the product engineering team, the priorities of the business, and the platforms used. This consequently results in a wide spectrum of unique customer needs. Take penetration testing as an example. This is a practice area that is presumably well-entrenched, both as a need and as an offering in the application security market. However, in today's age, even a singular requirement such as this could make or break an initial conversation. While, for one prospect, the need could be to conduct the test from a compliance (only) perspective, another's need could stem from a proactive software security initiative. There are many others who have internal assessment teams and often look outside for a third-party view.


Data centers in 2020: Automation, cheaper memory

Storage-class memory (SCM) is memory that goes in a DRAM slot and can function like DRAM, but can also function like an SSD. It has near-DRAM speed but has storage capabilities too, effectively turning it into a cache for SSD. Intel and Micron were working on SCM together but parted company. Intel released its SCM product, Optane, in May, and Micron came to market in October with QuantX. South Korean memory giant SK Hynix is also working on an SCM product, one that differs from the 3D XPoint technology Micron and Intel use. ... Remember when everyone was looking forward to shutting down their data centers entirely and moving to the cloud? So much for that idea. IDC’s latest CloudPulse survey suggests that 85% of enterprises plan to move workloads from public to private environments over the next year. And a recent survey by Nutanix found 73% of respondents reported that they are moving some applications off the public cloud and back on-prem. Security was cited as the primary reason. And since it’s doubtful security will ever be good enough for some companies and some data, the mad rush to the cloud will likely slow a little as people become more picky about what they put in the cloud and what they keep behind their firewall.


Batch Goes Out the Window: The Dawn of Data Orchestration

Add to the mix the whole world of streaming data. By open-sourcing Kafka to the Apache Foundation, LinkedIn let loose the gushing waters of data streams. These high-speed freeways of data largely circumvent traditional data management tooling, which can't stand the pressure. Doing the math, we see a vastly different scenario for today's data, as compared to only a few years ago. Companies have gone from relying on five to 10 source systems for an enterprise data warehouse to now embracing dozens or more systems across various analytical platforms. Meanwhile, the appetite for insights is greater than ever, as is the desire to dynamically link analytical systems with operational ones. The end result is a tremendous amount of energy focused on the need for ... meaningful data orchestration. For performance, governance, quality and a vast array of business needs, data orchestration is taking shape right now out of sheer necessity. The old highways for data have become too clogged and cannot support the necessary traffic. A whole new system is required. To wit, there are several software companies focused intently on solving this big problem. Here are just a few of the innovative firms that are shaping the data orchestration space.


With the majority of companies looking for expertise in the three- to 10-year range, Robinson said they must change their traditional recruitment/training tactics. "The technical skill supply is far less than the demand, so companies are not going to simply be able to meet their exact needs on the open market,'' he said. "There must be a willingness to look outside the normal sources for technical skill, and there must be a willingness to invest in training to get workers up to speed once they are in house." The trend is toward specialization, "but this certainly introduces a financial challenge,'' he said, since most companies cannot afford to build large teams of specialists. So depending on the company's strategy, they may lean more on generalists or they may explore different mixes of internal/external talent. "Even for tech workers who specialize, knowledge across the different areas of IT is necessary for efficient operation of complex systems,'' Robinson said. The primary approach most tech workers are taking for career growth is to deepen their skills in their area of expertise, he said. But they must have knowledge in other areas beyond this, Robinson stressed, especially as tech workers move from a junior level to an architect level.


Seagate doubles HDD performance with multi-actuator technology

The technology is pretty straightforward. Say you have four platters in a disk drive. The actuator controls the drive heads and moves them all in unison over all four platters. Seagate's multi-actuator makes two independent actuators out of one, so in a six-platter drive, the two actuators cover three platters each. ... While SSDs have buried HDDs in terms of performance, they simply can’t match HDDs for capacity. Of course there are multi-terabyte SSDs available, but they cost many times more than the 12TB/14TB HDD drives that Seagate and its chief competitor Western Digital offer. And data centers are not about to go all-SSD yet, if ever. So there's definitely a place for faster HDDs in the data center. Microsoft has been testing Exos 2X14 enterprise hard drives with MACH.2 technology to see if it can maintain the IOPS required for some of Microsoft’s cloud services, including Azure and the Microsoft Exchange Online email service, while increasing available storage capacity per data-center slot.
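As a back-of-the-envelope illustration (hypothetical numbers, not Seagate's measured figures), two independent actuators can service two random I/Os in parallel, roughly doubling IOPS for workloads spread across both halves of the platter stack:

```python
# Rough model: each actuator completes one random I/O per average seek,
# and independent actuators work in parallel on disjoint platter groups.

def iops(actuators, seek_ms):
    # I/Os per second = actuators * (1000 ms / average seek time per I/O).
    return actuators * (1000.0 / seek_ms)

single = iops(1, 8.0)  # one actuator, 8 ms average seek: 125 IOPS
dual   = iops(2, 8.0)  # two independent actuators: 250 IOPS
```

The doubling only holds when requests land on both actuator halves; a workload confined to one half sees single-actuator performance.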


Synchronizing Cache with the Database using NCache

Caching improves the performance of web applications by reducing resource consumption. It achieves this by storing page output or relatively stale application data across HTTP requests. Caching makes your site run faster and provides a better end-user experience. You can take advantage of caching to reduce the consumption of server resources by reducing server and database hits. The Cache object in ASP.NET can be used to store application data and reduce expensive server (database server, etc.) hits. As a result, your web page is rendered faster. When you cache application data, you typically have a copy of the data in the cache that also resides in the database. This duplication of data (in both the database and the cache) introduces data consistency issues: the data in the cache must stay in sync with the data in the database. You should know how data in the cache can be invalidated and removed in real time when any change occurs in the database.
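NCache itself is a .NET product, but the underlying pattern, keeping the cached copy consistent by updating or invalidating it on every database write, can be sketched generically. This write-through example uses Python and SQLite purely for illustration:

```python
import sqlite3

class CachedStore:
    """Minimal write-through cache: every database write also updates
    the cached copy, so the cache and the database stay in sync."""
    def __init__(self, conn):
        self.conn = conn
        self.cache = {}

    def get(self, key):
        if key in self.cache:          # cache hit: no database round-trip
            return self.cache[key]
        row = self.conn.execute(
            "SELECT value FROM items WHERE key = ?", (key,)).fetchone()
        if row:
            self.cache[key] = row[0]   # populate the cache on a miss
        return row[0] if row else None

    def set(self, key, value):
        self.conn.execute(
            "INSERT OR REPLACE INTO items (key, value) VALUES (?, ?)",
            (key, value))
        self.conn.commit()
        self.cache[key] = value        # write-through keeps the copy fresh

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE items (key TEXT PRIMARY KEY, value TEXT)")
store = CachedStore(conn)
store.set("price", "100")
print(store.get("price"))  # served from the cache: 100
```

If the database can also be changed by other writers, this sketch is not enough; that is where database-notification features like the ones NCache offers come in, pushing invalidations to the cache when a row changes.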


Coders are the new superheroes of natural disasters

It will be a launching point for open-source programs like Call for Code and "Clinton Global Initiative University" and will support the entire process of creating solutions for those most in need. Call for Code is seeking solutions for this year's challenge and coders can go to the 2019 Challenge Experience to join. Call for Code unites developers and data scientists around the world to create sustainable, scalable, and life-saving open source technologies via the power of Cloud, AI, blockchain and IoT tech. Clinton Global Initiative University partners with IBM and commits to inspiring university students to harness modern, emerging and open-source technologies to develop solutions for disaster response and resilience challenges. "Technology skills are increasingly valuable," Krook said, "even for students who don't intend to become professional software developers. For computer science students, putting the end user first, and empathizing with how they hope to use technology to solve their problems—particularly those that represent a danger to their health and well-being—will help them understand how to build high-quality and well-designed software."



There is a widespread belief that rules, structure and processes inhibit freedom and that organizations that want to build a culture of autonomy and performance need to avoid them like the plague. ... There are times in history when this has happened to entire societies. When the leaders of the French Revolution abolished the laws of the "Ancien Régime", the result was terror. When Russia descended into chaos after the revolution of 1917, the result was civil war and the emergence of a tyrant, Stalin, who began a sustained terror of his own. When the Weimar Republic in Germany failed in the 1920s, the result was Hitler. In our own time, as social structures weaken, strongmen like Putin or Erdogan come to power and impose personal rules of their own. Societies which abolish laws become chaotic. In chaos, there is absolute freedom. As the philosopher Hegel observed, absolute freedom is not freedom at all, but a playground for the arbitrary exercise of power, which ends in terror. In terror, only a few are free, and many are slaves.



Quote for the day:


"To do great things is difficult; but to command great things is more difficult." -- Friedrich Nietzsche


Daily Tech Digest - December 08, 2019

Machine learning use cases

Machine learning has many potential uses, including external (client-facing) applications like customer service, product recommendation, and pricing forecasts, but it is also being used internally to help speed up processes or improve products that were previously manual and time-consuming. You’ll notice these two types throughout our list of machine learning use cases below. ... This consumer-based use of machine learning applies mostly to smartphones and smart home devices. The voice assistants on these devices use machine learning to understand what you say and craft a response. The machine learning models behind voice assistants were trained on human languages and variations in the human voice, because they have to translate what they hear into words and then make an intelligent, on-topic response. ... This machine-based pricing strategy is best known in the travel industry. Flights, hotels, and other travel bookings usually have a dynamic pricing strategy behind them. Consumers know that the sooner they book their trip the better, but they may not realize that the actual price changes are made via machine learning.
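As a toy illustration of the dynamic-pricing idea (all fares below are made up), a model can learn from past bookings that prices rise as departure approaches, then estimate a fare for a new booking date:

```python
# Toy dynamic-pricing sketch: fit a least-squares line relating
# days-until-departure to observed fares, then predict a new fare.
def fit_line(xs, ys):
    """Ordinary least-squares fit; returns (slope, intercept)."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    slope = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / \
            sum((x - mx) ** 2 for x in xs)
    return slope, my - slope * mx

days_out = [60, 45, 30, 14, 7, 1]          # days before departure
price    = [180, 195, 220, 260, 310, 380]  # observed fares (hypothetical)

m, b = fit_line(days_out, price)

def predict(days_before_departure):
    return m * days_before_departure + b

print(round(predict(21)))  # estimated fare three weeks out
```

Real systems use far richer features (demand, seasonality, competitor prices) and models, but the core loop is the same: fit on historical bookings, predict for the next one.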



Agile vs DevOps Infographic: Everything You Need To Know

Agile methodology has been widely used by enterprises all across the globe. Software development teams have been using Agile for over a decade now because it provides efficient methods and techniques to build software. Agile methodology is centered around the idea of continuous iteration of development and testing in the software development lifecycle (SDLC). It focuses on iterative, incremental, and evolutionary software development. Agile methodology enables cross-functional teams to collaborate to deliver value faster, with greater flexibility, quality, and predictability.  ... DevOps is a way of deploying applications to production. It is a deployment model that emphasizes integration, communication, and collaboration among the development and operations teams to enable rapid deployments of software. DevOps focuses on allowing teams to deploy code faster to the production environment, using automated tools and processes. Automation is a critical element of DevOps that enables organizations to deliver applications and services rapidly.


Machine Learning as a Service (MLaaS) is the Next Trend No One is Talking About


Cloud service providers have the ability to ease the application of machine learning into everyday business use. Amazon, Google, and Microsoft all have preliminary services that enable machine learning functions. Speech recognition, sentiment analysis, chatbot enhancement, image and video analysis, and classification and regression services are some of the assistive solutions that are currently provided (“Comparing Machine Learning as a Service: Amazon, Microsoft Azure, Google Cloud AI, IBM Watson.”). As the application of machine learning becomes more valuable, the tech giants will continue to invest in building on top of their machine learning as a service (MLaaS) offerings. By utilizing these services from a cloud provider, companies can expect to save time, money, and resources that would have been invested into creating their own in-house solutions. By choosing to use MLaaS, companies can be quicker to market and engage the latest developments in the space, without taking on an extraordinary amount of risk.


4 Ways to Successfully Scale AI and Machine Learning for Businesses

Although it’s apparent that there is a shortage of data science talent on the job market, and hiring for this type of role can be challenging, AI and ML success requires much more than the skills of a data scientist. I’m talking about model building, data prep, training, and inference. If you’re serious about scaling and reaping the benefits that AI and ML have to offer, you should be looking to work with ML architects, data engineers, and operations managers. This piece goes into much more detail about how to structure your data science team. The next challenge is to organize and scale your team effectively. Do you have staff trained with the necessary skills in-house to move this project from concept to completion? Do you build these skills through retraining and hiring? Or will you contract a team to help in completing this project in a pre-determined amount of time? Building up your current team’s skill sets will help you to scale on a long-term basis, whereas third-party contractors will help get your project off the ground with speed and efficiency.


A WebSocket is a bi-directional communication channel between a client and a server, which is great when communication needs to be low-latency and high-frequency. WebSockets are mainly used in collaborative, event-driven, or live apps, where the conventional client-server request-response model is too slow. Examples include team dashboards and stock trading applications. ... STOMP (Simple Text Oriented Messaging Protocol) was born as an alternative to existing open messaging protocols, like AMQP, allowing scripting languages like Ruby, Python, and Perl to talk to enterprise message brokers with a subset of common messaging operations. ... Happily for Java developers, Spring supports the WebSocket API, which implements raw WebSockets, WebSocket emulation through SockJS (when WebSockets are not supported), and publish-subscribe messaging through STOMP. In this tutorial, you will learn how to use the WebSocket API and configure a Spring Boot message broker. Then we will authenticate a JavaScript STOMP client during the WebSocket handshake and implement Okta as an authentication and access token service.
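STOMP itself really is just text on the wire: a frame is a command line, header lines, a blank line, the body, and a terminating NUL byte. As an illustrative sketch (in Python rather than the tutorial's Java/JavaScript stack), a SEND frame like the ones a STOMP client publishes over a WebSocket can be assembled by hand:

```python
def stomp_frame(command: str, headers: dict, body: str = "") -> bytes:
    """Serialize a STOMP frame: command line, header lines,
    a blank line, the body, and a terminating NUL byte."""
    lines = [command] + [f"{k}:{v}" for k, v in headers.items()]
    return ("\n".join(lines) + "\n\n" + body + "\x00").encode()

# A SEND frame publishing a message to a broker destination
frame = stomp_frame(
    "SEND",
    {"destination": "/topic/greetings", "content-type": "text/plain"},
    "hello",
)
print(frame)
```

The destination name here is a made-up example; in a Spring Boot setup the broker prefixes and destinations come from your own message-broker configuration.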


Processing Geospatial Data at Scale With Databricks

The first challenge involves dealing with scale in streaming and batch applications. The sheer proliferation of geospatial data and the SLAs required by applications overwhelms traditional storage and processing systems. Customer data has been spilling out of existing vertically scaled geo databases into data lakes for many years now due to pressures such as data volume, velocity, storage cost, and strict schema-on-write enforcement. While enterprises have invested in geospatial data, few have the proper technology architecture to prepare these large, complex datasets for downstream analytics. Further, given that scaled data is often required for advanced use cases, the majority of AI-driven initiatives are failing to make it from pilot to production. ... Databricks offers a unified data analytics platform for big data analytics and machine learning used by thousands of customers worldwide. It is powered by Apache Spark™, Delta Lake, and MLflow with a wide ecosystem of third-party and available library integrations. Databricks UDAP delivers enterprise-grade security, support, reliability, and performance at scale for production workloads.


QR code scams rise in China, putting e-payment security in spotlight

QR codes, originally invented by Japan’s car parts maker Denso, have become so ubiquitous in China that even street hawkers now use them for electronic payments, as seen here in Beijing. Photo: Weibo
The suspect had replaced legitimate codes created by merchants with fake ones embedded with a virus programmed to steal the personal information of consumers. Scammers have also been profiting handsomely from the mainland’s multibillion dollar bike-sharing industry. By replacing the original QR code used to unlock the bicycle with a fake one, they have been able to cheat users into transferring their money into their own bank accounts. The proliferation of this type of crime has been made possible by the explosion of mobile payments in China, as the concept of a cashless society moves ever closer to becoming a reality. Nowhere is this shift more evident than in the abundance of QR codes – a type of barcode, or machine-readable image – that allow consumers to make small payments by simply scanning the image and confirming the transaction. QR codes were invented in 1994 by Denso Wave, a unit of Japan’s largest automotive parts maker, to allow for quick scanning when tracking vehicles during the assembly process. From the car factory, the codes later spread to broader usage, encompassing everything from consumer purchases to social media.


The hidden risks of cryptojacking attacks

The rise of cryptojacking has followed the same upward trajectory as the value of cryptocurrency. Suddenly, digital “cash” is worth actual money and hackers, who usually have to take several steps to generate income from stolen data, have a direct path to cashing in on their exploits. But if all the malware does is sit quietly in the background generating cryptocurrency, is it really much of a danger? In short, yes – for two reasons. In fundamental terms, cryptojacking attacks are about stealing… in this case energy and system resources. The energy might be minimal (more about that in a moment) but using resources slows the performance of the overall system and actually increases wear and tear on the hardware, reducing its lifespan and resulting in frustration, inefficiency and increased costs. Much more importantly, however, a cryptojacking-compromised system is a flashing warning sign that a vulnerability exists. Often, infiltrating a system to cryptojack involves opening access points that can be easily leveraged to steal other types of data.


Developing Deep Learning Models for Chest X-rays with Adjudicated Image Labels


For very large datasets consisting of hundreds of thousands of images, such as those needed to train highly accurate deep learning models, it is impractical to manually assign image labels. As such, we developed a separate, text-based deep learning model to extract image labels using the de-identified radiology reports associated with each X-ray. This NLP model was then applied to provide labels for over 560,000 images from the Apollo Hospitals dataset used for training the computer vision models. To reduce noise from any errors introduced by the text-based label extraction and also to provide the relevant labels for a substantial number of the ChestX-ray14 images, approximately 37,000 images across the two datasets were visually reviewed by radiologists. These were separate from the NLP-based labels and helped to ensure high quality labels across such a large, diverse set of training images.
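For intuition only, text-based label extraction from free-text reports can be sketched with naive keyword and negation rules. The actual system described here used a trained deep learning NLP model, which handles far more linguistic variation than the hypothetical rules below:

```python
import re

# Hypothetical findings and trivially simple patterns, for illustration only
LABELS = {
    "pneumothorax": r"\bpneumothorax\b",
    "opacity":      r"\bopacit(y|ies)\b",
    "fracture":     r"\bfracture",
}

def extract_labels(report: str) -> dict:
    """Assign a positive/negative label per finding mentioned in a report,
    using per-sentence keyword matching with crude negation detection."""
    labels = {}
    for sentence in report.lower().split("."):
        negated = bool(re.search(r"\b(no|without|negative for)\b", sentence))
        for label, pattern in LABELS.items():
            if re.search(pattern, sentence):
                labels[label] = not negated
    return labels

print(extract_labels("No pneumothorax. Right lower lobe opacity."))
# {'pneumothorax': False, 'opacity': True}
```

Rule systems like this are brittle (hedged phrasing, comparisons to priors, and double negation all break them), which is exactly why a learned model, followed by radiologist review of a subset, is used instead.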


Injection Vulnerabilities – 20 Years and Counting

Lack of awareness: it’s common to see junior software engineers writing vulnerable code. The “injection” concept may not be very intuitive to them, and they deliver vulnerable code because it’s the easiest or fastest way for them to implement a specific component.
Rush: we all know how stressful and demanding modern software development environments can be. Concepts like Agile and CI/CD are great for fast delivery, but when developers are focused only on delivering the code, they might forget to check for security issues.
Complexity: APIs and modern apps are complex. A modern app, like Uber for example, might look very simple from the UX (user experience) perspective, but on the backend there are many databases and microservices communicating behind the scenes. In many cases, it’s hard to track which inputs come from the client itself and require more security attention (such as filtering and scanning), and which inputs are internal to the system.
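The canonical instance of the injection concept is SQL injection. A minimal sketch of the vulnerable pattern versus the parameterized fix (using Python and SQLite purely for illustration):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, is_admin INTEGER)")
conn.execute("INSERT INTO users VALUES ('alice', 0), ('root', 1)")

user_input = "' OR '1'='1"  # classic injection payload

# VULNERABLE: user input concatenated straight into the SQL string,
# so the payload rewrites the query's logic
vulnerable = conn.execute(
    f"SELECT name FROM users WHERE name = '{user_input}'").fetchall()
print(vulnerable)  # every row comes back: the WHERE clause is always true

# SAFE: parameterized query; the driver treats the input as data, not SQL
safe = conn.execute(
    "SELECT name FROM users WHERE name = ?", (user_input,)).fetchall()
print(safe)  # [] — no user is literally named "' OR '1'='1"
```

The fix costs nothing at the point of writing, which is why awareness is the real bottleneck: the vulnerable version and the safe version differ by one line.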



Quote for the day:


"Increasingly, management's role is not to organize work, but to direct passion and purpose." -- Greg Satell


Daily Tech Digest - December 07, 2019

Why a computer will never be truly conscious


Living organisms store experiences in their brains by adapting neural connections in an active process between the subject and the environment. By contrast, a computer records data in short-term and long-term memory blocks. That difference means the brain’s information handling must also be different from how computers work. The mind actively explores the environment to find elements that guide the performance of one action or another. Perception is not directly related to the sensory data: a person can identify a table from many different angles, without having to consciously interpret the data and then ask their memory if that pattern could be created by alternate views of an item identified some time earlier. Another perspective on this is that the most mundane memory tasks are associated with multiple areas of the brain – some of which are quite large. Skill learning and expertise involve reorganization and physical changes, such as changing the strengths of connections between neurons.


cloud server rack
One of the six customers impacted by the ransomware infection is FIA Tech, a financial and brokerage firm. The ransomware caused an outage of FIA Tech cloud services. In a message to customers, FIA Tech said "the attack was focused on disrupting operations in an attempt to obtain a ransom from our data center provider." FIA Tech did not name the data center provider, but a quick search identifies it as CyrusOne. We've been told by a source close to CyrusOne that the data center provider does not intend to pay the ransom demand, barring any future unforeseen developments. The company owns 45 data centers in Europe, Asia, and the Americas, and has more than 1,000 customers. It is also considering a sale after receiving takeover interest over the summer, according to Bloomberg. CyrusOne is a publicly-traded, NASDAQ-listed company. In an SEC filing last year, the company explicitly listed "ransomware" as a risk factor for its business.


Costs are likely to continue to improve as, among other things, companies reduce the level of pricey cobalt in battery components and achieve manufacturing improvements as production volumes rise. But metals mining is already a mature process, so further declines there are likely to slow rapidly after 2025 as the cost of materials makes up a larger and larger portion of the total cost, the report finds. Deeper cost declines beyond 2030 are likely to require shifts from the dominant lithium-ion chemistry today to entirely different technologies, like lithium-metal, solid-state and lithium-sulfur batteries. Each of these are still in much earlier development stages, so it’s questionable whether any will be able to displace lithium-ion by 2030, Field says. Gene Berdichevsky, chief executive of anode materials maker Sila Nanotechnologies, agrees it will be hard for the industry to consistently break through the $100/kWh floor with current technology. But he also thinks the paper discounts some of the nearer-term improvements we’ll see in lithium-ion batteries without full-fledged shifts to different chemistries.


Banking as a Platform- The Future is Now!

No matter what type or size a platform offering may be, some of the following will be a must. Embedded analytics, running like an omnipresent undercurrent, will become a hygiene requirement and will also play a major role in revenue generation and profitability from the platform. AI and ML will be key differentiators in enhancing user experience and operational efficiency, too, leading to monetary benefits. BaaP’s DNA will be defined by how good the bank’s API strategy is, and complete agility in the usage of APIs will be the new norm for business teams. Scaling, multiple usage, data privacy and cyber security, compounded with regulatory guidelines, will be quite crucial for the smooth and safe functioning of BaaP, and these aspects will be central to any decision making by banks. It may sound a little too audacious to talk about the future of platform banking, which is still in a nascent stage. But history always serves up a great recipe for predicting the future (we are taking the data analytics route!).


AI Policies Are Setting Stage To Transform Healthcare But More Is Needed

New data standards proposals by Health and Human Services will empower patients and lead to better, faster diagnosis. The proposal would require electronic medical record (EMR) companies to provide application programming interfaces (APIs) for patients to access and share their health data. Currently it is ridiculously complex and expensive for patients to get copies of their own records. Shockingly, it may cost over $500 to get your medical record. Accessibility and data sharing are critical for better, faster diagnosis and treatments. AI has been used to predict heart attacks five years into the future. It is also able to predict who is at the greatest risk of suffering from depression. The new standards would put the patient in control of who uses their data and for what purposes. Entrepreneurial companies are leveraging venture dollars to build the best AI capabilities in the world, but they need access to health data to prove their benefits to patients. Patients should be able to choose how their data is used.


Why A Human Firewall Is The Biggest Defence Against Data Breach

Hackers are targeting servers that haven’t been set up correctly, giving them access to sensitive data with minimal effort. Cloud-based systems such as Office 365 that don’t have multi-factor authorisation enabled, or web-based systems that are not patched, result in vulnerabilities that can be exploited. Also, sometimes hardware such as firewalls can be configured incorrectly, or poor security settings on individual devices can lead to loopholes that can be exploited. ... A hacker only needs to gain access to one user’s account to then gain control and access the compromised network and data. An approach known as the “known good” model works by identifying anomalies that stray from the established normal baseline and highlighting them as a potential threat and cyber-attack. Business leaders are widely criticised and held accountable for failing to protect their consumers’ data, especially in the light of the vast IT and training budgets at their disposal, yet it is the daily performance of front-line staff that reveals the true strengths and weaknesses within any organisation.
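The “known good” model can be sketched in miniature: learn a baseline from normal activity, then flag anything that strays too far from it. The sample numbers and the three-sigma threshold below are illustrative assumptions, not a product's actual algorithm:

```python
import statistics

def build_baseline(samples):
    """Learn a 'known good' baseline (mean and spread) from historical
    activity, e.g. logins per hour for one account."""
    return statistics.mean(samples), statistics.stdev(samples)

def is_anomalous(value, baseline, threshold=3.0):
    """Flag values that stray more than `threshold` standard deviations
    from the established normal behaviour."""
    mean, stdev = baseline
    return abs(value - mean) > threshold * stdev

logins_per_hour = [4, 5, 3, 6, 5, 4, 5, 4]   # normal activity (made up)
baseline = build_baseline(logins_per_hour)
print(is_anomalous(5, baseline))    # a typical hour -> False
print(is_anomalous(250, baseline))  # a sudden burst -> True
```

Production systems model many correlated signals rather than one counter, but the principle is the same: the alert fires on deviation from the learned baseline, not on a signature of a known attack.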


Two Russians indicted over Dridex and Zeus malware


“Sitting quietly at computer terminals far away, these cyber criminals allegedly stole tens of millions of dollars from unwitting members of our business, non-profit, governmental, and religious communities. “Each and every one of these computer intrusions was, effectively, a cyber-enabled bank robbery. We take such crimes extremely seriously and will do everything in our power to hold these criminals to justice.” The losses incurred through the activities of Yakubets’ group – known as Evil Corp – totalled hundreds of millions of pounds in the UK, the US, and other countries. Additional investigations in the UK targeted a network of money launderers who funnelled profits back to Evil Corp, for which eight people have already gone to prison. Other intelligence supplied through UK law enforcement has helped support sanctions brought against the group by the US Treasury’s Office of Foreign Asset Control. The NCA described the operation as a sophisticated and technically skilled one, which represented one of the most significant cyber crime threats ever faced in the UK.


Your Privacy Could Be at Risk Without These Updates to Behavioral Biometrics


Mastercard is one of the major brands investing in passive biometrics. The goal is to determine the probability that the authenticated user is present during the respective interactions. The credit card provider’s system evaluates more than 300 signals to reach a conclusion. They include how a person navigates around a site on their device or the amount of pressure they put on a touch-sensitive screen. Passive behavioral biometrics measurements also make it possible to catch strange behaviors that might not immediately become apparent through small samples of data. For example, if a person typically uses the scroll wheel on a mouse to navigate, but then switches to using keyboard commands, that change could indicate someone else has gotten access to a system and is using it fraudulently. Keep in mind that passive and active biometrics both have associated pros and cons. No single method works best in every case. However, the use of passive biometrics to gauge probabilities is relatively new. Since well-known brands like Mastercard are working with it, there’s a good chance this option will become even more prominent.


FBI recommends that you keep your IoT devices on a separate network

"Your fridge and your laptop should not be on the same network," the FBI's Portland office said in a weekly tech advice column. "Keep your most private, sensitive data on a separate system from your other IoT devices," it added. ... The reasoning behind it is simple. By keeping all the IoT equipment on a separate network, any compromise of a "smart" device will not grant an attacker a direct route to a user's primary devices -- where most of their data is stored. Jumping across the two networks would require considerable effort from the attacker. However, placing primary devices and IoT devices on separate networks might not sound that easy for non-technical users. The simplest way is to use two routers. The smarter way is to use "micro-segmentation," a feature found in the firmware of most WiFi routers, which allows router admins to create virtual networks (VLANs). VLANs will behave as different networks, even though they effectively run on the same router. While isolating IoT devices on their own network is the best course of action for both home users and companies alike, this wasn't the FBI's only advice on dealing with IoT devices.


Usability Testing and Testing APIs with Hallway Testing

Hallway testing can be described as using "random" people or groups of people to test software products and interfaces. The "randomness" of a person depends on what we are trying to test. Marchewka suggested trying to engage people who will be using the product (i.e. members of the target group) to get the best understanding of how they will do that. For their hallway testing session they invite a truly random group of people if they are checking a mobile app, and a random group of API users if they are verifying the UX of an API. Using the specific background and experience of all the people taking part in a particular session of hallway testing, we can uncover all inefficiencies of the user interface in a tested product, said Marchewka. The app or software does not need to have a GUI to benefit from hallway testing; it can be used as part of API prototyping activity, as Marchewka explained. Consumers of an API can be asked to use an early version during a hallway testing session; for example, creators can find out if methods are named correctly.



Quote for the day:


"Courage is leaning into the doubts and fears to do what you know is right even when it doesn't feel natural or safe." -- Lee Ellis


Daily Tech Digest - December 05, 2019

The Rise Disappearance And Retirement Of Google Co-Founders

USA - Technology - Google Introduces T-Mobile G1
It’s a fitting end for two of the most mysterious tech leaders of a generation, who are both exiting their company as it hovers near $1 trillion in market cap. But it’s also a troubling time for Google. The search giant has faced increasing scrutiny from employees, media organizations, activists, regulators, and lawmakers since Page and Brin first stepped back in the summer of 2015. And many of those controversies are problems of Page and Brin’s creation, either because the duo didn’t foresee the ways in which Google could do harm or because they explicitly steered the company in a direction that flouted standard corporate ethics. In that context, it’s important to look back at the big moments in both men’s careers and how the actions they took have had an outsized impact not just on the tech industry, but on the internet and society itself. What Page and Brin have built will likely last for decades to come, and knowing how Google got to where it is today will be an important piece in the puzzle of figuring out where it goes in the future. ... Although Google is now one of the most powerful forces in online advertising on the planet, Page and Brin weren’t too keen on turning their prototype search engine into an ad-selling machine, at first.



Planning for an intelligent future


Compared to current planning activities, which invariably work on pre-defined cycles such as weekly or monthly processes, intelligent planning can be considered to have more of an ‘always-on’ approach. ... As such, any business that has access to data that exceeds the volume that humans can analyse and understand, will need intelligent planning to remain competitive. For example, a large retail organisation can harvest data from millions of daily transactions to make better buying, customer engagement, and operational decisions. But they don’t need to stop at short-term future actions; instead they should consider using social media sentiment and detailed demographics to make longer term, strategic decisions around areas such as range, store locations and customer experience. Financial services is another prime candidate for intelligent planning, particularly where understanding and influencing consumer behaviour is involved, for anything from calculating the probability of a customer renewing their insurance policy; the likelihood of a loan holder defaulting on their payments; or the future spending profile of credit card customers.


How AI is Transforming the Banking Sector


Banks are always under intense pressure from regulatory bodies to enforce the most recent regulations. These regulations are there to protect banks and customers from fraudulent activities while at the same time reducing financial crimes like money laundering, tax evasion, and terrorism financing. AI in banking also helps ensure that banks are compliant with the most recent regulations. AI relies on cognitive fraud analytics that watch customer behaviors, track transactions, recognize dubious activities, and assess the data of different compliance systems. Businesses can remain up to date with compliance rules and regulations through the use of AI. AI systems can read compliance requirements and detect any changes in the requirements through deep learning and natural language processing. Through this, banks can remain on top of ever-evolving regulatory requirements and align their own regulations with them. Through technologies like analytics, deep learning, and machine learning, banks can remain compliant with regulations.


Augmented reality in retail: Virtual try before you buy

“Nike Fit is a transformative solution and an industry first—using a digital technology to solve for massive customer friction,” Nike writes in its press release for the launch of the app. “In the short term, Nike Fit will improve the way Nike designs, manufactures, and sells shoes—product better tailored to match consumer needs. A more accurate fit can contribute to everything from less shipping and fewer returns to better performance.” ... “The fashion industry has not traditionally been geared toward helping people understand how clothes will actually fit,” the company writes in its press release. “Gap is committed to winning customer trust by consistently presenting and delivering products that make customers look and feel great, and we are using technology to get there.” ... As the technology evolves and gives users more and more accurate renderings of how digital objects look in physical spaces, I expect that more and more brands and industries will hop onto the AR marketing bandwagon. From fashion and accessories to footwear and home décor, and beyond, AR has the potential to transform and completely reimagine customer experiences.


Why the sheer scale of DevOps testing now needs machine learning


The potential value of machine learning is particularly evident in mobile and web app testing because these are very fragmented and complex platforms to handle and understand. What ML can do in this context is to keep all those platforms visible, connected, and in a ready-state mode. In a test lab, ML helps to surface when something is outdated, disconnected from WiFi, or otherwise failing – and moreover, helps explain why that has happened. Another way in which ML helps is by showing trends and patterns, helping not only to visualise all that data but to provide further insight and make sense of what has happened over the past weeks or months. For instance, it can identify the most problematic functional area in an application, such as the top 5 failing tests over the past 2-3 testing cycles, or which mobile/web platforms have been most error-prone over the past cycles. Was a failure caused by the lab, was it a pop-up, or a security alert? This really matters. Teams invest time, resources and money in automating test activities, but where all this really has an impact and adds value is at the reporting stage.
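Before any ML is involved, the kind of aggregation described here, surfacing the most failure-prone tests across cycles, can be sketched in a few lines. The test names and results below are hypothetical:

```python
from collections import Counter

# Hypothetical test-cycle results: (test name, passed?) per run
runs = [
    ("login_test", False), ("checkout_test", True), ("login_test", False),
    ("search_test", False), ("login_test", True), ("search_test", False),
    ("checkout_test", True), ("payment_test", False), ("login_test", False),
]

# Count failures per test across all cycles
failures = Counter(name for name, passed in runs if not passed)
print(failures.most_common(2))  # the most failure-prone tests
```

ML earns its keep one step further on: classifying *why* each of those tests failed (lab issue, pop-up, security alert) across thousands of runs, which is beyond simple counting.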


The 10 most important cyberattacks of the decade


Yahoo deserves the first mention because of the sheer size of its breach and the damaging effect it had on the company's ability to compete as an email and search engine platform. In 2013, all three billion of Yahoo's accounts were compromised, making the breach the largest in the history of the internet. It took the company three years to notify the public that everyone's names, email addresses, passwords, birth dates, phone numbers and security answers had been sold on the Dark Web by hackers.  Security experts say the Yahoo breach is notable because of how it was mishandled by the company and the devastating effect it had on Verizon's $4.8 billion acquisition. Yahoo initially discovered that a breach occurred in 2015 exposing 500 million accounts. ... The size of the Equifax breach pales in comparison to the value of the data exposed to hackers. As one of America's largest credit bureaus, the company held some of the most sensitive data on hundreds of millions of people. Hackers gained access to the information of 143 million Equifax customers, including their names, birth dates, driver's license numbers, Social Security numbers and addresses. More than 200,000 credit card numbers were released, and 182,000 documents with personally identifying information were accessed by cybercriminals.


To stop a tech apocalypse we need ethics and the arts


Matt Reaney, the chief executive and founder of Big Cloud – a recruitment company that specialises in data science, machine learning and AI employment – has argued that technology needs more people with humanities training: "[The humanities] give context to the world we operate in day to day. Critical thinking skills, deeper understanding of the world around us, philosophy, ethics, communication, and creativity offer different approaches to problems posed by technology." Reaney proposes a "more blended approach" to higher education, offering degrees that combine the arts and STEM. Another advocate of the interdisciplinary approach is Joseph Aoun, president of Northeastern University in Boston. He has argued that in the age of AI, higher education should focus on what he calls "humanics", equipping graduates with three key literacies: technological literacy, data literacy and human literacy. The time has come to answer the call for humanities graduates capable of crossing over into the world of technology so that our human future can be as bright as possible.


Learning algorithms and the self-supervised machine with Dr. Philip Bachman

Supervised learning is sort of what's had the most immediate success and what's driving a lot of the deep learning-powered technologies that are being used for doing things like speech recognition in phones or automated question answering for chatbots and stuff like that. So supervised learning refers to kind of a subset of the techniques that people apply when they have access to a large amount of data and they have a specific type of action that they want a model to perform when it processes that data. And what they do is, they get a person to go and label all the data and say, okay, well this is the input to the model at this point in time. And given this input, this is what the model should output. So you're putting a lot of constraints on what the model is doing and constructing those constraints manually by having a person looking at a set of a million images and, for each image, they say, oh, this is a cat, this is a dog, this is a person, this is a car.
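The labeled-data setup described in the transcript – a person assigns the correct output to each input, and the model is fit to reproduce those labels – can be sketched in a few lines. This is a toy illustration (a nearest-centroid classifier on made-up 2-D "images"), not the deep networks the speaker is discussing:

```python
# Each example is a hand-labeled (input, label) pair, exactly as described:
# a person looked at each input and said "this is a cat", "this is a dog".
labeled_data = [
    ((1.0, 1.2), "cat"),
    ((0.9, 1.0), "cat"),
    ((4.0, 4.1), "dog"),
    ((4.2, 3.9), "dog"),
]

def train(data):
    """Fit the model to the human-provided labels: one centroid per class."""
    sums, counts = {}, {}
    for (x, y), label in data:
        sx, sy = sums.get(label, (0.0, 0.0))
        sums[label] = (sx + x, sy + y)
        counts[label] = counts.get(label, 0) + 1
    return {lbl: (sx / counts[lbl], sy / counts[lbl]) for lbl, (sx, sy) in sums.items()}

def predict(centroids, point):
    """Output the label whose centroid is closest to the input."""
    px, py = point
    return min(
        centroids,
        key=lambda lbl: (centroids[lbl][0] - px) ** 2 + (centroids[lbl][1] - py) ** 2,
    )

model = train(labeled_data)
print(predict(model, (1.1, 1.1)))  # "cat"
```

The expensive part in practice is not the model but the labeling: someone has to annotate the million images before any of this can run.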


What a cloud-native approach to RPA could mean to your business


The enterprise's affinity for cloud computing hasn't traditionally been reflected in the RPA industry. That is, until now – with the world's first cloud-native RPA platform, we're bringing the advantages of cloud-native, intelligent RPA deployments to organisations worldwide. For business users, cloud-native RPA operates as a self-service technology accessed via a web-based graphical interface from anywhere. With a single click or drag-and-drop motion, users can automate those parts of any job that don't require human creativity, problem-solving capabilities, empathy, or judgment. Just as with popular Software-as-a-Service (SaaS) apps, users can create what they need using an intuitive web interface within the browser. For many common bots, no coding is required. There are no large client downloads to install and manage, and no commands to memorise; automation and processes are exposed via drag-and-drop functionality and flow charts. Also, because there is no software client, IT doesn't have to get involved. Infrastructure management costs go away, significantly reducing the total cost of ownership (TCO).


At an even more fundamental level, anyone looking at the state of enterprise security today understands that whatever we’re doing now isn’t working. “The perimeter-based model of security categorically has failed,” says Forrester principal analyst Chase Cunningham. “And not from a lack of effort or a lack of investment, but just because it’s built on a house of cards. If one thing fails, everything becomes a victim. Everyone I talk to believes that.” Cunningham has taken on the zero-trust mantle at Forrester, where analyst Jon Kindervag, now at Palo Alto Networks, developed a zero-trust security framework in 2009. The idea is simple: trust no one. Verify everyone. Enforce strict access-control and identity-management policies that restrict employee access to the resources they need to do their job and nothing more. Garrett Bekker, principal analyst at the 451 Group, says zero trust is not a product or a technology; it’s a different way of thinking about security. “People are still wrapping their heads around what it means. Customers are confused and vendors are inconsistent on what zero trust means. But I believe it has the potential to radically alter the way security is done.”
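The "trust no one, verify everyone" policy described above boils down to default-deny access control: an identity gets nothing unless an explicit grant exists for exactly the resource and action it is requesting. A minimal sketch, with hypothetical identities and resource names:

```python
# Hypothetical least-privilege policy: each identity is granted only the
# specific resource/action pairs its job requires; everything else is denied.
POLICY = {
    "alice@corp": {"payroll-db:read"},
    "bob@corp":   {"build-server:deploy", "repo:read"},
}

def is_allowed(identity, resource, action, policy=POLICY):
    """Zero-trust style check: deny by default unless this exact grant exists."""
    return f"{resource}:{action}" in policy.get(identity, set())

print(is_allowed("alice@corp", "payroll-db", "read"))      # True: explicit grant
print(is_allowed("alice@corp", "build-server", "deploy"))  # False: not her job
print(is_allowed("mallory@evil", "payroll-db", "read"))    # False: unknown identities get nothing
```

Real zero-trust deployments layer continuous verification (device posture, session context, re-authentication) on top, but the default-deny core is the part that distinguishes it from the failed perimeter model.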



Quote for the day:


"Leadership is a journey, not a destination. It is a marathon, not a sprint. It is a process, not an outcome." -- John Donahoe