Daily Tech Digest - December 10, 2019

Internet of the Senses is on the horizon, thanks to AR and VR

While smell cannot yet be conveyed digitally, the report found that will change, with smell becoming an online experience by 2030. More than half (56%) of respondents said technology would evolve to the point that they would be able to smell scents in films. The same capability will be applied to sales as retailers market products with smell, the report found, meaning perfume commercials could emit a scent. Along the same lines, humans will also be able to experience taste through devices, according to the report. Nearly 45% of respondents believe that in the next 10 years, a device could exist that digitally enhances the food someone eats. This advancement could have significant impacts on health and diet, allowing people to eat healthier foods that taste more savory than they are. It also presents another opportunity for retail marketing, as consumers could taste food products. People viewing cooking programs could even taste the food on screen, the report found. More than half (63%) of respondents said smartphone users would be able to feel the shape and texture of digital icons.

4 Authentication Use Cases: Which Protocol To Use?

Where strong security is a requirement, SAML is generally a good choice. All aspects of the exchange between the RP and IdP can be digitally signed and verified by both parties. This provides high assurance that each party is communicating with the correct counterpart and not an imposter. In addition, the assertion from the IdP may be encrypted, so that HTTPS is not the only protection against attackers accessing users’ data. To add further security, signing and encryption keys may be rotated regularly. To take OIDC to the same level of security requires extra cryptographic keys, as in Open Banking extensions, and this can be relatively onerous to set up and maintain. However, OIDC benefits from the use of JSON and the simpler use by mobile apps, compared to SAML. ... Here, the preference will be for OIDC, as it is likely that a variety of devices, some not browser-based, might be involved, which normally rules out SAML. The built-in consent associated with OIDC enhances the privacy aspects of the data sharing. In addition, the use of signing and encryption may be used to strengthen the security aspects to a degree that adequately meets the requirements of handling such data.
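Both SAML assertions and OIDC tokens rest on the same primitive: the issuer signs the message, and the relying party verifies the signature before trusting any claim inside it. As a simplified illustration of that idea (not the full SAML/JWS machinery; production OIDC typically uses asymmetric RS256 keys published via JWKS rather than a shared secret), the sketch below signs and verifies a compact, JWT-like token using only the Python standard library:

```python
import base64
import hashlib
import hmac
import json

def b64url(data: bytes) -> str:
    """Base64url-encode without padding, as compact JWTs do."""
    return base64.urlsafe_b64encode(data).rstrip(b"=").decode()

def sign_token(claims: dict, key: bytes) -> str:
    """Produce a compact HS256 token: header.payload.signature."""
    header = b64url(json.dumps({"alg": "HS256", "typ": "JWT"}).encode())
    payload = b64url(json.dumps(claims).encode())
    signing_input = f"{header}.{payload}".encode()
    sig = b64url(hmac.new(key, signing_input, hashlib.sha256).digest())
    return f"{header}.{payload}.{sig}"

def verify_token(token: str, key: bytes) -> bool:
    """Recompute the signature and compare in constant time."""
    header, payload, sig = token.split(".")
    signing_input = f"{header}.{payload}".encode()
    expected = b64url(hmac.new(key, signing_input, hashlib.sha256).digest())
    return hmac.compare_digest(sig, expected)
```

Rotating the signing key, as the article recommends, simply means the verifier must accept tokens signed under either the current or the previous key during the rollover window.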

Predictions for AI and ML in 2020

The digital skills gap within workforces has meant that employees are unsure how to unleash AI’s full potential. But according to SnapLogic CTO Craig Stewart, this problem could take a step towards being solved next year. “Transparency remains a hot topic and will continue into 2020 as companies aim to ensure transparency, visibility, and trust of AI and AI-assisted decisions,” said Stewart. “We’ll see further development and expansion of the ‘explainable AI movement,’ and efforts like it.” ... Even though there are the aforementioned worries about AI and ML replacing human workers, some experts in digital innovation believe that the gradual inclusion of the technology will end up being a much more collaborative process. “Despite fears that it will replace human employees, in 2020 AI and machine learning will increasingly be used to aid and augment them,” said Felix Gerdes, Insight UK‘s director of digital innovation services. “For instance, customer service workers need to be certain they are giving customers the right advice.”

The Future of Spring Cloud's Hystrix Project

Spring Cloud Hystrix was built as a wrapper on top of the Netflix Hystrix library. Since then, it has been adopted by many enterprises and developers to implement the Circuit Breaker pattern. In November 2018, Netflix announced it was putting the project into maintenance mode, which prompted Spring Cloud to announce the same. Since then, no further enhancements have been made to the Netflix library. At SpringOne 2019, Spring announced that the Hystrix Dashboard will be removed from Spring Cloud 3.1, which makes it officially dead. As the Circuit Breaker pattern has been advertised so heavily, many developers have either used it or want to use it, and now need a replacement. Resilience4j has been introduced to fill this gap and provide a migration path for Hystrix users. Resilience4j is inspired by Netflix Hystrix but is designed for Java 8 and functional programming. It is lightweight compared to Hystrix, as its only dependency is the Vavr library. Netflix Hystrix, by contrast, depends on Archaius, which in turn pulls in several other external libraries such as Guava and Apache Commons.
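The Circuit Breaker pattern itself is language-agnostic. As a rough sketch of the state machine that both Hystrix and Resilience4j implement (this is an illustrative toy, not either library's actual API), a minimal breaker counts consecutive failures, rejects calls outright while open, and probes the backend again after a cooldown:

```python
import time

class CircuitBreaker:
    """Minimal circuit breaker: CLOSED -> OPEN after a failure
    threshold, then HALF_OPEN after a cooldown to probe recovery."""

    def __init__(self, failure_threshold=3, reset_timeout=30.0):
        self.failure_threshold = failure_threshold
        self.reset_timeout = reset_timeout
        self.failures = 0
        self.state = "CLOSED"
        self.opened_at = 0.0

    def call(self, fn, *args, **kwargs):
        if self.state == "OPEN":
            if time.monotonic() - self.opened_at >= self.reset_timeout:
                self.state = "HALF_OPEN"  # allow one probe call through
            else:
                raise RuntimeError("circuit open: call rejected")
        try:
            result = fn(*args, **kwargs)
        except Exception:
            self.failures += 1
            # A failed probe, or too many consecutive failures, opens the circuit.
            if self.state == "HALF_OPEN" or self.failures >= self.failure_threshold:
                self.state = "OPEN"
                self.opened_at = time.monotonic()
            raise
        self.failures = 0
        self.state = "CLOSED"
        return result
```

Resilience4j adds refinements on top of this basic idea, such as sliding-window failure rates and decorator-style composition with retries and bulkheads.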

Dubai’s Kentech kicks off digital transformation drive

“Kentech has suffered with poor IT adoption partnerships in the past, so we needed something that was world-class. We wanted something that our business would love and use.” Kentech launched its tendering process early this year and by July it had selected Oracle as its cloud partner. “During the tendering process, we found that our business was closely aligned to construction,” said O’Gara. “Some of our requirements were quite complex, especially when dealing with reimbursable and fixed-price work – they can chop and change on a daily basis. We found that Oracle could meet those complex requirements. “For us, it was the ERP and budgeting models that were the differentiator. We’ve now started implementation and we’re going to go live at the end of this year with the first phase. We’re a project-based business, so we need to be able to scale up and down very quickly. The cloud model suits us perfectly as a business because we can be flexible, rather than going all out and saying ‘I need 10 more servers’.”

Hybrid multi-cloud a must for banks

Banks operating under a hybrid multi-cloud model can predictably and optimally manage finances as cost models shift from fixed to variable. Storing data on site in traditional facilities is expensive and locks banks into long-term contracts for a set amount of data storage. Banks over-provision infrastructure and storage, paying for unnecessary resources. Hybrid cloud models allow banks to scale as needed, purchasing only what is immediately utilised under the subscription-based model offered by most CSPs. Traditional procurement and implementation is slow, so capacity management involves a degree of guessing, resulting in over-capitalised systems offering little ROI. As the cloud allows for scaling on a pay-as-you-go model, spend is greatly optimised. For example, UBS’s risk management platform is powered by Microsoft Azure, saving the financial services company 40 percent on infrastructure costs, speeding up calculations by 100 percent, and gaining near-infinite scale.

The 10 Best Examples Of How Companies Use Artificial Intelligence In Practice

Today, Waymo wants to bring self-driving technology to the world not only to move people around, but to reduce the number of crashes. Its autonomous vehicles are currently shuttling riders around California in self-driving taxis. Right now, the company can’t charge a fare, and a human driver still sits behind the wheel during the pilot program. Google signaled its commitment to deep learning when it acquired DeepMind. Not only did the system learn to play 49 different Atari games, the AlphaGo program was also the first to beat a professional player at the game of Go. Another AI innovation from Google is Google Duplex. Using natural language processing, an AI voice interface can make phone calls and schedule appointments on your behalf. ... Another innovative way Amazon uses artificial intelligence is to ship things to you before you even think about buying them. The company collects a great deal of data about each person’s buying habits, and has such confidence in that data that it not only recommends items to customers but, using predictive analytics, aims to predict what they need before they need it.

Verizon kills email accounts of archivists trying to save Yahoo Groups history

According to the Archive Team: "As of 2019-10-16 the directory lists 5,619,351 groups. 2,752,112 of them have been discovered. 1,483,853 (54%) have public message archives with an estimated number of 2.1 billion messages (1,389 messages per group on average so far). 1.8 billion messages (86%) have been archived as of 2018-10-28." Verizon has issued a statement to the group supporting the Archive Team, telling concerned archivists that "the resources needed to maintain historical content from Yahoo Groups pages is cost-prohibitive, as they're largely unused". The telecoms giant also said the people booted from the service had violated its terms of service and suggested the number of users affected was small. "Regarding the 128 people who joined Yahoo Groups with the goal to archive them – are those people from Archiveteam.org? If so, their actions violated our Terms of Service. Because of this violation, we are unable to reauthorize them," Verizon said.

Open source refers to an online project that is publicly accessible for anyone to modify and share, as long as they provide attribution to the original developer, reported TechRepublic contributor Jack Wallen in What is open source? Since its release over 20 years ago, open source has changed the internet. Without open source, the online experience would be "a far different place; much more limited, expensive, less robust, less feature-driven and less scalable. Big name companies would be much less powerful and successful as well in the absence of open source software," wrote Scott Matteson in How to decide if open source or proprietary software solutions are best for your business. ... Major tech companies have set their sights on open source development, with Microsoft's acquisition of GitHub and IBM's acquisition of Red Hat. However, developers are concerned about the impact these tech giants could have on the open source community, the report found. Nearly 41% of respondents said they were concerned about the level of involvement of major tech players in open source. The main concerns they cited were possible self-serving intentions of big companies, the use of restrictive licenses that give large organizations an unfair competitive advantage, and overall trust in large corporations, the report found.

Is cloud migration iterative or waterfall?

Cloud migration projects have two dimensions. First, they are short-term sprints in which a project team migrates a handful of application workloads and data stores to a single cloud or multicloud. These teams act independently, with little architectural oversight or governance, and the sprints last between two and six months. Second is the longer-term architecture, including security, governance, management, and monitoring. This may be directed by a cloud business office, the office of the CTO, or a master cloud architect, and this set of processes goes on continuously. Here is the problem: the former seems to overshadow the latter, meaning that we’re moving to the cloud using ad hoc and decoupled sprints, all with little regard for common security and governance layers or any sort of management and monitoring. The result is something we’ve talked about here before: complexity. Although we built something that seems to work, applications migrated from one platform to another are deployed with different technology stacks.

Quote for the day:

"Without growth, organizations struggle to add talented people. Without talented people, organizations struggle to grow." -- Ray Attiyah

Daily Tech Digest - December 09, 2019

The PC was supposed to die a decade ago. Instead, this happened

Not all that long ago, tech pundits were convinced that by 2020 the personal computer as we know it would be extinct. You can even mark the date and time of the PC's death: January 27, 2010, at 10:00 A.M. Pacific Time, when Steve Jobs stepped onto a San Francisco stage to unveil the iPad. The precise moment was documented by noted Big Thinker Nicholas Carr in The New Republic with this memorable headline: "The PC Officially Died Today." ... And so, here we are, a full decade after the PC's untimely death, and the industry is still selling more than a quarter-billion-with-a-B personal computers every year. Which is pretty good for an industry that has been living on borrowed time for ten years. Maybe the reason the PC industry hasn't suffered a mass extinction event yet is because they adapted, and because those competing platforms weren't able to take over every PC-centric task. So what's different as we approach 2020? To get a proper before-and-after picture, I climbed into the Wayback Machine and traveled back to 2010.

Netflix open sources data science management tool

Netflix has open sourced Metaflow, an internally developed tool for building and managing Python-based data science projects. Metaflow addresses the entire data science workflow, from prototype to model deployment, and provides built-in integrations to AWS cloud services.  Machine learning and data science projects need mechanisms to track the development of the code, data, and models. Doing all of that manually is error-prone, and tools for source code management, like Git, aren’t well-suited to all of these tasks. Metaflow provides Python APIs to the entire stack of technologies in a data science workflow, from access to the data through compute resources, versioning, model training, scheduling, and model deployment. ... Metaflow does not favor any particular machine learning framework or data science library. Metaflow projects are just Python code, with each step of a project’s data flow represented by common Python programming idioms. Each time a Metaflow project runs, the data it generates is given a unique ID. This lets you access every run—and every step of that run—by referring to its ID or user-assigned metadata.
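The per-run versioning idea is easy to sketch. The toy below is not Metaflow's real FlowSpec API, just a simplified illustration of the underlying concept: every run receives a unique ID, and every step's outputs are addressable as (run ID, step, artifact name), so any past result can be retrieved later.

```python
import itertools

class RunStore:
    """Toy artifact store: every run gets a unique ID, and each
    step's outputs are addressable as (run_id, step, name)."""

    _ids = itertools.count(1)  # monotonically increasing run IDs

    def __init__(self):
        self.artifacts = {}

    def start_run(self) -> int:
        """Begin a run and return its unique ID."""
        return next(self._ids)

    def save(self, run_id, step, name, value):
        """Record one artifact produced by a step of a run."""
        self.artifacts[(run_id, step, name)] = value

    def load(self, run_id, step, name):
        """Fetch any artifact from any past run by its address."""
        return self.artifacts[(run_id, step, name)]
```

In Metaflow itself this addressing is what lets you inspect any historical run's data from a notebook instead of re-executing the pipeline.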

AppSec in the Age of DevSecOps

Application security as a practice is dynamic. No two applications are the same, even if they belong to the same market domain and presumably operate on identical business use cases. Some (of the many) factors that cause this variance include the technology stack of choice, the programming style of developers, the culture of the product engineering team, the priorities of the business, the platforms used, etc. This consequently results in a wide spectrum of unique customer needs. Take penetration testing as an example. This is a practice area that is presumably well-entrenched, both as a need and as an offering in the application security market. However, in today's age, even a singular requirement such as this could make or break an initial conversation. While for one prospect the need could be to conduct the test from a compliance (only) perspective, another's need could stem from a proactive software security initiative. There are many others who have internal assessment teams and often look outside for a third-party view.

Data centers in 2020: Automation, cheaper memory

Storage-class memory (SCM) is memory that goes in a DRAM slot and can function like DRAM, but can also function like an SSD. It has near-DRAM speed but has storage capabilities too, effectively turning it into a cache for SSDs. Intel and Micron were working on SCM together but parted company. Intel released its SCM product, Optane, in May, and Micron came to market in October with QuantX. South Korean memory giant SK Hynix is also working on an SCM product that differs from the 3D XPoint technology Micron and Intel use. ... Remember when everyone was looking forward to shutting down their data centers entirely and moving to the cloud? So much for that idea. IDC’s latest CloudPulse survey suggests that 85% of enterprises plan to move workloads from public to private environments over the next year. And a recent survey by Nutanix found 73% of respondents reported that they are moving some applications off the public cloud and back on-prem. Security was cited as the primary reason. And since it’s doubtful security will ever be good enough for some companies and some data, the mad rush to the cloud will likely slow a little as people become more picky about what they put in the cloud and what they keep behind their firewall.

Batch Goes Out the Window: The Dawn of Data Orchestration

Add to the mix the whole world of streaming data. By open-sourcing Kafka to the Apache Foundation, LinkedIn let loose the gushing waters of data streams. These high-speed freeways of data largely circumvent traditional data management tooling, which can't stand the pressure. Doing the math, we see a vastly different scenario for today's data, as compared to only a few years ago. Companies have gone from relying on five to 10 source systems for an enterprise data warehouse to now embracing dozens or more systems across various analytical platforms. Meanwhile, the appetite for insights is greater than ever, as is the desire to dynamically link analytical systems with operational ones. The end result is a tremendous amount of energy focused on the need for ... meaningful data orchestration. For performance, governance, quality and a vast array of business needs, data orchestration is taking shape right now out of sheer necessity. The old highways for data have become too clogged and cannot support the necessary traffic. A whole new system is required. To wit, there are several software companies focused intently on solving this big problem. Here are just a few of the innovative firms that are shaping the data orchestration space.

With the majority of companies looking for expertise in the three- to 10-year range, Robinson said they must change their traditional recruitment/training tactics. "The technical skill supply is far less than the demand, so companies are not going to simply be able to meet their exact needs on the open market,'' he said. "There must be a willingness to look outside the normal sources for technical skill, and there must be a willingness to invest in training to get workers up to speed once they are in house." The trend is toward specialization, "but this certainly introduces a financial challenge,'' he said, since most companies cannot afford to build large teams of specialists. So depending on the company's strategy, they may lean more on generalists or they may explore different mixes of internal/external talent. "Even for tech workers who specialize, knowledge across the different areas of IT is necessary for efficient operation of complex systems,'' Robinson said. The primary approach most tech workers are taking for career growth is to deepen their skills in their area of expertise, he said. But they must have knowledge in other areas beyond this, Robinson stressed, especially as tech workers move from a junior level to an architect level.

Seagate doubles HDD performance with multi-actuator technology

The technology is pretty straightforward. In a conventional drive, a single actuator controls the drive heads and moves them all in unison across every platter. Seagate's multi-actuator design splits that into two independent actuators, so in a six-platter drive, the two actuators cover three platters each. ... While SSDs have buried HDDs in terms of performance, they simply can’t match HDDs for capacity. Of course there are multi-terabyte SSDs available, but they cost many times more than the 12TB/14TB HDDs that Seagate and its chief competitor Western Digital offer. And data centers are not about to go all-SSD yet, if ever. So there's definitely a place for faster HDDs in the data center. Microsoft has been testing Exos 2X14 enterprise hard drives with MACH.2 technology to see if it can maintain the IOPS required for some of Microsoft’s cloud services, including Azure and the Exchange Online email service, while increasing available storage capacity per data-center slot.

Synchronizing Cache with the Database using NCache

Caching improves the performance of web applications by reducing resource consumption. It achieves this by storing page output or relatively stale application data across HTTP requests. Caching makes your site run faster and provides a better end-user experience. You can take advantage of caching to reduce the consumption of server resources by reducing server and database hits. The cache object in ASP.NET can be used to store application data and reduce expensive server (database server, etc.) hits, so your web page renders faster. When you cache application data, you typically have a copy of the data in the cache that also resides in the database. This duplication of data (in both the database and the cache) introduces data consistency issues: the data in the cache must be kept in sync with the data in the database. You should know how data in the cache can be invalidated and removed in real time when any change occurs in the database.
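The synchronization requirement can be sketched with a read-through cache that subscribes to change notifications from the data store. The classes below are a hypothetical toy, not NCache's actual API (NCache offers mechanisms such as SQL dependencies for this), but they show the core idea: a database change evicts the stale cache entry, and the next read pulls the fresh value.

```python
class Database:
    """Toy database that notifies listeners when a row changes."""

    def __init__(self):
        self.rows = {}
        self.listeners = []

    def update(self, key, value):
        self.rows[key] = value
        for notify in self.listeners:
            notify(key)

class SyncedCache:
    """Read-through cache that evicts an entry whenever the
    database reports that the underlying row changed."""

    def __init__(self, db):
        self.db = db
        self.store = {}
        db.listeners.append(self.invalidate)

    def invalidate(self, key):
        self.store.pop(key, None)  # drop the stale copy, if cached

    def get(self, key):
        if key not in self.store:  # cache miss: read through to the DB
            self.store[key] = self.db.rows[key]
        return self.store[key]
```

The design choice here is eviction rather than in-place refresh: the cache never holds a value newer than its last read, so it can never serve data the database has since overwritten.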

Coders are the new superheroes of natural disasters

It will be a launching point for open-source programs like Call for Code and the Clinton Global Initiative University and will support the entire process of creating solutions for those most in need. Call for Code is seeking solutions for this year's challenge, and coders can go to the 2019 Challenge Experience to join. Call for Code unites developers and data scientists around the world to create sustainable, scalable, and life-saving open source technologies via the power of cloud, AI, blockchain and IoT tech. Clinton Global Initiative University partners with IBM and commits to inspiring university students to harness modern, emerging and open-source technologies to develop solutions for disaster response and resilience challenges. "Technology skills are increasingly valuable," Krook said, "even for students who don't intend to become professional software developers. For computer science students, putting the end user first, and empathizing with how they hope to use technology to solve their problems—particularly those that represent a danger to their health and well-being—will help them understand how to build high-quality and well-designed software."

There is a widespread belief that rules, structure and processes inhibit freedom and that organizations that want to build a culture of autonomy and performance need to avoid them like the plague. ... There are times in history when this has happened to entire societies. When the leaders of the French Revolution abolished the laws of the "Ancien Regime", the result was terror. When Russia descended into chaos after the revolution of 1917, the result was civil war and the emergence of a tyrant, Stalin, who began a sustained terror of his own. When the Weimar Republic in Germany failed in the 1920s, the result was Hitler. In our own time, as social structures weaken, strongmen like Putin or Erdogan come to power and impose personal rules of their own. Societies which abolish laws become chaotic. In chaos, there is absolute freedom. As the philosopher Hegel observed, absolute freedom is not freedom at all, but a playground for the arbitrary exercise of power, which ends in terror. In terror, only a few are free, and many are slaves.

Quote for the day:

"To do great things is difficult; but to command great things is more difficult." -- Friedrich Nietzsche

Daily Tech Digest - December 08, 2019

Machine learning use cases

Machine learning has many potential uses, including external (client-facing) applications like customer service, product recommendation, and pricing forecasts, but it is also being used internally to help speed up processes or improve products that were previously manual and time-consuming. You’ll notice these two types throughout our list of machine learning use cases below. ... This consumer-based use for machine learning applies mostly to smartphones and smart home devices. The voice assistants on these devices use machine learning to understand what you say and craft a response. The machine learning models behind voice assistants were trained on human languages and variations in the human voice, because they have to translate what they hear into words and then make an intelligent, on-topic response. ... This machine-based pricing strategy is best known in the travel industry. Flights, hotels, and other travel bookings usually have a dynamic pricing strategy behind them. Consumers know that the sooner they book their trip the better, but they may not realize that the actual price changes are made via machine learning.

Agile vs DevOps Infographic: Everything You Need To Know

Agile methodology has been widely used by enterprises all across the globe. Software development teams have been using Agile for over a decade now because it provides efficient methods and techniques to build software. Agile methodology is centered around the idea of continuous iteration of development and testing in the software development lifecycle (SDLC). It focuses on iterative, incremental, and evolutionary software development. Agile methodology enables cross-functional teams to collaborate to deliver value faster, with greater flexibility, quality, and predictability. ... DevOps is a way of deploying applications to production. It is a deployment model that emphasizes integration, communication, and collaboration between the development and operations teams to enable rapid deployments of software. DevOps focuses on allowing teams to deploy code to the production environment faster, using automated tools and processes. Automation is a critical element of DevOps that helps organizations deliver applications and services rapidly.

Machine Learning as a Service (MLaaS) is the Next Trend No One is Talking About

Cloud service providers have the ability to ease the application of machine learning into everyday business use. Amazon, Google, and Microsoft all have preliminary services that enable machine learning functions. Speech recognition, sentiment analysis, chatbot enhancement, image and video analysis, and classification and regression services are some of the assistive solutions that are currently provided (“Comparing Machine Learning as a Service: Amazon, Microsoft Azure, Google Cloud AI, IBM Watson.”). As the application of machine learning becomes more valuable, the tech giants will continue to invest in building on top of their machine learning as a service (MLaaS) offerings. By utilizing these services from a cloud provider, companies can expect to save time, money, and resources that would have been invested into creating their own in house solutions. By choosing to use MLaaS, companies can be quicker to market and engage the latest developments in the space, without taking on an extraordinary amount of risk.

4 Ways to Successfully Scale AI and Machine Learning for Businesses

Although it’s apparent that there is a shortage of data science talent on the job market, and hiring for this type of role can be challenging, AI and ML success requires much more than the skills of a data scientist. I’m talking about model building, data prep, training, and inference. If you’re serious about scaling and reaping the benefits that AI and ML have to offer, you should be looking to work with ML architects, data engineers, and operations managers. This piece goes into much more detail about how to structure your data science team. The next challenge is to organize and scale your team effectively. Do you have staff trained with the necessary skills in-house to move this project from concept to completion? Do you build these skills through retraining and hiring? Or will you contract a team to help complete this project in a pre-determined amount of time? Building up your current team’s skillsets will help you to scale on a long-term basis, whereas third-party contractors will help to get your project off the ground with speed and efficiency.

A WebSocket is a bi-directional communication channel between a client and a server, which is great when communication must be low-latency and high-frequency. WebSockets are mainly used in collaborative, event-driven, or real-time apps, where the speed of the conventional client-server request-response model doesn’t meet the requirements. Examples include team dashboards and stock trading applications. ... STOMP (Simple Text Oriented Messaging Protocol) was born as an alternative to existing open messaging protocols, like AMQP, to connect scripting languages like Ruby, Python, and Perl to enterprise message brokers with a subset of common message operations. ... Happily for Java developers, Spring supports the WebSocket API, which implements raw WebSockets, WebSocket emulation through SockJS (when WebSockets are not supported), and publish-subscribe messaging through STOMP. In this tutorial, you will learn how to use the WebSocket API and configure a Spring Boot message broker. Then we will authenticate a JavaScript STOMP client during the WebSocket handshake and implement Okta as an authentication and access token service.
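Because STOMP is text-oriented, a frame is just a command line, header lines, a blank line, a body, and a NUL terminator, whatever language produces it. As a minimal illustration (the destination name below is made up), here is a sketch of serializing the kind of SEND frame a client pushes over the WebSocket to a broker topic:

```python
def stomp_frame(command: str, headers: dict, body: str = "") -> bytes:
    """Serialize a STOMP frame: command line, header lines,
    blank line, body, then a NUL byte terminator."""
    lines = [command] + [f"{k}:{v}" for k, v in headers.items()]
    return ("\n".join(lines) + "\n\n" + body).encode() + b"\x00"

# A SEND frame publishing a message to a (hypothetical) topic:
frame = stomp_frame(
    "SEND",
    {"destination": "/topic/trades", "content-type": "text/plain"},
    "AAPL 270.11",
)
```

Seeing the raw frame makes clear why STOMP suited scripting languages: any client that can write text to a socket can speak it.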

Processing Geospatial Data at Scale With Databricks

The first challenge involves dealing with scale in streaming and batch applications. The sheer proliferation of geospatial data and the SLAs required by applications overwhelms traditional storage and processing systems. Customer data has been spilling out of existing vertically scaled geo databases into data lakes for many years now due to pressures such as data volume, velocity, storage cost, and strict schema-on-write enforcement. While enterprises have invested in geospatial data, few have the proper technology architecture to prepare these large, complex datasets for downstream analytics. Further, given that scaled data is often required for advanced use cases, the majority of AI-driven initiatives are failing to make it from pilot to production. ... Databricks offers a unified data analytics platform for big data analytics and machine learning used by thousands of customers worldwide. It is powered by Apache Spark™, Delta Lake, and MLflow with a wide ecosystem of third-party and available library integrations. Databricks UDAP delivers enterprise-grade security, support, reliability, and performance at scale for production workloads.

QR code scams rise in China, putting e-payment security in spotlight

The suspect had replaced legitimate codes created by merchants with fake ones embedded with a virus programmed to steal the personal information of consumers. Scammers have also been profiting handsomely from the mainland’s multibillion-dollar bike-sharing industry. By replacing the original QR code used to unlock a bicycle with a fake one, they have been able to trick users into transferring money into the scammers’ bank accounts. The proliferation of this type of crime has been made possible by the explosion of mobile payments in China, as the concept of a cashless society moves ever closer to becoming a reality. Nowhere is this shift more evident than in the abundance of QR codes – a type of barcode, or machine-readable image – that allow consumers to make small payments by simply scanning the image and confirming the transaction. QR codes were invented in 1994 by Denso Wave, a unit of Japan’s largest automotive parts maker, to allow for quick scanning when tracking vehicles during the assembly process. From the car factory, the codes later spread to broader usage, encompassing everything from consumer purchases to social media.

The hidden risks of cryptojacking attacks

The rise of cryptojacking has followed the same upward trajectory as the value of cryptocurrency. Suddenly, digital “cash” is worth actual money and hackers, who usually have to take several steps to generate income from stolen data, have a direct path to cashing in on their exploits. But if all the malware does is sit quietly in the background generating cryptocurrency, is it really much of a danger? In short, yes – for two reasons. In fundamental terms, cryptojacking attacks are about stealing… in this case energy and system resources. The energy might be minimal (more about that in a moment) but using resources slows the performance of the overall system and actually increases wear and tear on the hardware, reducing its lifespan, resulting in frustration, inefficiency and increased costs. Much more importantly however, a cryptojacking-compromised system is a flashing warning sign that a vulnerability exists. Often, infiltrating a system to cryptojack involves opening access points that can be easily leveraged to steal other types of data.
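
Because the warning sign described above is sustained resource drain, one simple detection heuristic is to flag windows of consistently elevated CPU utilisation. A toy sketch (the threshold and window length are arbitrary examples, not tuned values):

```python
# Flag a run of utilisation samples (0.0-1.0) that stays above a baseline
# for several consecutive readings - the kind of steady load a hidden
# miner produces, as opposed to bursty normal activity.

def sustained_high_load(samples, threshold=0.85, window=5):
    """Return True if `window` consecutive samples exceed `threshold`."""
    run = 0
    for s in samples:
        run = run + 1 if s > threshold else 0
        if run >= window:
            return True
    return False
```

Real monitoring would feed this from an agent sampling per-process CPU; the point is that cryptojacking tends to look like a flat, elevated line rather than normal spiky usage.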

Developing Deep Learning Models for Chest X-rays with Adjudicated Image Labels

For very large datasets consisting of hundreds of thousands of images, such as those needed to train highly accurate deep learning models, it is impractical to manually assign image labels. As such, we developed a separate, text-based deep learning model to extract image labels using the de-identified radiology reports associated with each X-ray. This NLP model was then applied to provide labels for over 560,000 images from the Apollo Hospitals dataset used for training the computer vision models. To reduce noise from any errors introduced by the text-based label extraction and also to provide the relevant labels for a substantial number of the ChestX-ray14 images, approximately 37,000 images across the two datasets were visually reviewed by radiologists. These were separate from the NLP-based labels and helped to ensure high quality labels across such a large, diverse set of training images.
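
The production system used a trained deep learning NLP model, but the core idea of text-based label extraction can be illustrated with a much simpler rule-based sketch: map report language to image labels while handling negation. The finding names and patterns below are illustrative only:

```python
# Toy label extractor: 1 = finding mentioned affirmatively,
# 0 = finding explicitly negated, absent = not mentioned.
import re

FINDINGS = ["pneumothorax", "opacity", "fracture"]

def extract_labels(report: str) -> dict:
    labels = {}
    for finding in FINDINGS:
        negated = rf"(no|without)\s+(evidence\s+of\s+)?{finding}"
        if re.search(negated, report, re.IGNORECASE):
            labels[finding] = 0
        elif re.search(finding, report, re.IGNORECASE):
            labels[finding] = 1
    return labels
```

A real system has to cope with far more variation (uncertainty hedges, prior findings, synonyms), which is why a learned model rather than regexes was used, and why radiologist review of a labelled subset matters.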

Injection Vulnerabilities – 20 Years and Counting

Lack of awareness: it’s common to see junior software engineers writing code that is vulnerable. The “injection” concept may not be very intuitive for them, and they deliver vulnerable code because it’s the easiest and fastest way for them to implement a specific component. Rush: we all know how stressful and demanding modern software development environments can be. Concepts like Agile and CI/CD are great for fast delivery, but when developers are focused only on delivering the code, they might forget to check for security issues. Complexity: APIs and modern apps are complex. A modern app, like Uber for example, might look very simple from the UX (user experience) perspective, but on the backend there are many databases and microservices that communicate with each other behind the scenes. In many cases, it’s hard to track which inputs come from the client itself and require more security attention (such as filtering and scanning), and which inputs are internal to the system.
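
The injection concept is easiest to see in miniature. A minimal sketch using Python's built-in sqlite3 module, contrasting a string-concatenated query with a parameterized one:

```python
# SQL injection in miniature: the same user input, handled two ways.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, is_admin INTEGER)")
conn.execute("INSERT INTO users VALUES ('alice', 0), ('bob', 1)")

user_input = "' OR '1'='1"

# Vulnerable: input is concatenated straight into the SQL text, so the
# injected condition makes the WHERE clause match every row.
vulnerable = conn.execute(
    "SELECT name FROM users WHERE name = '" + user_input + "'"
).fetchall()

# Safe: a parameterized query treats the input as data, not SQL, so the
# odd-looking string simply matches no user name.
safe = conn.execute(
    "SELECT name FROM users WHERE name = ?", (user_input,)
).fetchall()
```

The vulnerable query returns both rows; the parameterized one returns none. Parameterization is the easiest/fastest path once it's a habit, which is exactly the awareness point above.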

Quote for the day:

"Increasingly, management's role is not to organize work, but to direct passion and purpose." -- Greg Satell

Daily Tech Digest - December 07, 2019

Why a computer will never be truly conscious

Living organisms store experiences in their brains by adapting neural connections in an active process between the subject and the environment. By contrast, a computer records data in short-term and long-term memory blocks. That difference means the brain’s information handling must also be different from how computers work. The mind actively explores the environment to find elements that guide the performance of one action or another. Perception is not directly related to the sensory data: A person can identify a table from many different angles, without having to consciously interpret the data and then ask their memory whether that pattern could be created by alternate views of an item identified some time earlier. Another perspective on this is that the most mundane memory tasks are associated with multiple areas of the brain – some of which are quite large. Skill learning and expertise involve reorganization and physical changes, such as changing the strengths of connections between neurons.

One of the six customers impacted by the ransomware infection is FIA Tech, a financial and brokerage firm. The ransomware caused an outage of FIA Tech's cloud services. In a message to customers, FIA Tech said "the attack was focused on disrupting operations in an attempt to obtain a ransom from our data center provider." FIA Tech did not name the data center provider, but a quick search identifies it as CyrusOne. We've been told by a source close to CyrusOne that the data center provider does not intend to pay the ransom demand, barring any future unforeseen developments. The company owns 45 data centers in Europe, Asia, and the Americas, and has more than 1,000 customers. It is also considering a sale after receiving takeover interest over the summer, according to Bloomberg. CyrusOne is a publicly traded, NASDAQ-listed company. In an SEC filing last year, the company explicitly listed "ransomware" as a risk factor for its business.

Costs are likely to continue to improve as, among other things, companies reduce the level of pricey cobalt in battery components and achieve manufacturing improvements as production volumes rise. But metals mining is already a mature process, so further declines there are likely to slow rapidly after 2025 as the cost of materials makes up a larger and larger portion of the total cost, the report finds. Deeper cost declines beyond 2030 are likely to require shifts from the dominant lithium-ion chemistry today to entirely different technologies, like lithium-metal, solid-state and lithium-sulfur batteries. Each of these is still at a much earlier stage of development, so it’s questionable whether any will be able to displace lithium-ion by 2030, Field says. Gene Berdichevsky, chief executive of anode materials maker Sila Nanotechnologies, agrees it will be hard for the industry to consistently break through the $100/kWh floor with current technology. But he also thinks the paper discounts some of the nearer-term improvements we’ll see in lithium-ion batteries without full-fledged shifts to different chemistries.
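
The link between rising production volumes and falling costs is often modelled with a learning curve (Wright's law): each doubling of cumulative production cuts cost by a fixed percentage. A small sketch with made-up numbers, purely to show the arithmetic rather than any figure from the report:

```python
# Learning-curve cost projection: cost falls by `learning_rate` with each
# doubling of cumulative production volume.
import math

def projected_cost(cost_now, volume_now, volume_future, learning_rate=0.18):
    doublings = math.log2(volume_future / volume_now)
    return cost_now * (1 - learning_rate) ** doublings

# Illustrative: a $130/kWh pack at 4x today's cumulative volume with an
# 18% learning rate lands around $87/kWh.
print(round(projected_cost(130.0, 1.0, 4.0), 1))
```

The model also shows why declines slow: once mature inputs like mined metals dominate the cost stack, the learnable fraction shrinks and the effective learning rate falls.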

Banking as a Platform- The Future is Now!

No matter the type or size of a platform offering, some of the following will be a must. Embedded analytics, running like an omnipresent undercurrent, will become a hygiene requirement, and will also play a major role in revenue generation and profitability from the platform. AI and ML will be key differentiators in enhancing user experience and operational efficiency, in turn leading to monetary benefits. BaaP’s DNA will be defined by how well-crafted the bank's API strategy is, and complete agility in the usage of APIs will be the new norm for business teams. Scaling and multiple usage, together with data privacy and cyber security compounded by regulatory guidelines, will be crucial for the smooth and safe functioning of BaaP, and these aspects will be central to any decision-making by banks. It may sound a little too audacious to talk about the future of platform banking while it is still at a nascent stage. But history always serves a great recipe for predicting the future (we are taking the data analytics route!).

AI Policies Are Setting Stage To Transform Healthcare But More Is Needed

New data standards proposals by Health and Human Services will empower patients and lead to better, faster diagnosis. The proposal would require electronic medical record (EMR) companies to provide interfaces, known as APIs, for patients to access and share their health data. Currently it is ridiculously complex and expensive for patients to get copies of their own records. Shockingly, it may cost over $500 to get your medical record. Accessibility and data sharing are critical for better, faster diagnosis and treatments. AI has been used to predict heart attacks five years into the future. It is also able to predict who is at the greatest risk of suffering from depression. The new standards would put the patient in control of who uses their data and for what purposes. Entrepreneurial companies are leveraging venture dollars to build the best AI capabilities in the world, but they need access to health data to prove their benefits to patients. Patients should be able to choose how their data is used.

Why A Human Firewall Is The Biggest Defence Against Data Breach

Hackers are targeting servers that haven’t been set up correctly, giving them access to sensitive data with minimal effort. Cloud-based systems such as Office 365 without multi-factor authentication enabled, or web-based systems that are not patched, leave vulnerabilities that can be exploited. Also, hardware such as firewalls can be configured incorrectly, or poor security settings on individual devices can lead to loopholes that can be exploited. ... A hacker only needs to gain access to one user’s account to gain control of the compromised network and its data. An approach known as the “known good” model identifies anomalies that stray from an established normal baseline and highlights them as potential threats and cyber attacks. Business leaders are widely criticised and held accountable for failing to protect their customers' data, especially in light of the vast IT and training budgets at their disposal, yet it is the daily performance of front-line staff that reveals the true strengths and weaknesses within any organisation.
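
The "known good" model can be sketched very simply: enrol the combinations that are normal for each account, then flag anything outside that baseline. The users, devices, and locations below are invented examples:

```python
# Minimal "known good" baseline: logins matching an enrolled
# (user, device, location) triple pass; anything else is flagged
# for review as a potential compromise.
baseline = {
    ("alice", "laptop-01", "london"),
    ("alice", "phone-02", "london"),
}

def is_anomalous(user: str, device: str, location: str) -> bool:
    return (user, device, location) not in baseline

assert not is_anomalous("alice", "laptop-01", "london")   # normal login
assert is_anomalous("alice", "laptop-01", "minsk")        # new location
```

Real systems learn the baseline statistically rather than from a hand-written set, but the deny-unless-known shape is the same.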

Two Russians indicted over Dridex and Zeus malware

“Sitting quietly at computer terminals far away, these cyber criminals allegedly stole tens of millions of dollars from unwitting members of our business, non-profit, governmental, and religious communities. “Each and every one of these computer intrusions was, effectively, a cyber-enabled bank robbery. We take such crimes extremely seriously and will do everything in our power to hold these criminals to justice.” The losses incurred through the activities of Yakubets’ group – known as Evil Corp – totalled hundreds of millions of pounds across the UK, the US and other countries. Additional investigations in the UK targeted a network of money launderers who funnelled profits back to Evil Corp, for which eight people have already gone to prison. Other intelligence supplied through UK law enforcement has helped support sanctions brought against the group by the US Treasury’s Office of Foreign Asset Control. The NCA described the operation as a sophisticated and technically skilled one, which represented one of the most significant cyber crime threats ever faced in the UK.

Your Privacy Could Be at Risk Without These Updates to Behavioral Biometrics

Mastercard is one of the major brands investing in passive biometrics. The goal is to determine the probability that the authenticated user is present during the respective interactions. The credit card provider’s system evaluates more than 300 signals to make a conclusion. They include how a person navigates around a site on their device or the amount of pressure they put on a touch-sensitive screen. Passive behavioral biometrics measurements also allow catching some strange behaviors that might not immediately become apparent through small samples of data. For example, if a person typically uses the scroll wheel on a mouse to navigate, but then switches to using keyboard commands, that change could indicate someone else has gotten access to a system and is using it fraudulently. Keep in mind that passive and active biometrics both have associated pros and cons. No single method works best in every case. However, the use of passive biometrics to gauge probabilities is relatively new. Since well-known brands like Mastercard are working with it, there’s a good chance this option will become even more prominent.
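
One of the passive signals described above, typing rhythm, can be sketched as a distance between an enrolled timing profile and a fresh sample. The intervals and threshold below are arbitrary illustrations, not values from any real product:

```python
# Compare a user's typing rhythm (inter-key intervals in ms) against an
# enrolled profile; a large distance suggests a different person.

def rhythm_distance(profile, sample):
    """Mean absolute difference between two equal-length interval lists."""
    return sum(abs(p - s) for p, s in zip(profile, sample)) / len(profile)

enrolled = [120, 95, 140, 110]
assert rhythm_distance(enrolled, [118, 97, 142, 108]) < 25   # likely same user
assert rhythm_distance(enrolled, [60, 40, 55, 50]) >= 25     # suspicious
```

Production systems combine hundreds of such signals probabilistically, as the Mastercard example suggests, rather than relying on a single threshold.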

FBI recommends that you keep your IoT devices on a separate network

"Your fridge and your laptop should not be on the same network," the FBI's Portland office said in a weekly tech advice column. "Keep your most private, sensitive data on a separate system from your other IoT devices," it added. ... The reasoning behind it is simple. By keeping all the IoT equipment on a separate network, any compromise of a "smart" device will not grant an attacker a direct route to a user's primary devices -- where most of their data is stored. Jumping across the two networks would require considerable effort from the attacker. However, placing primary devices and IoT devices on separate networks might not sound that easy for non-technical users. The simplest way is to use two routers. The smarter way is to use "micro-segmentation," a feature found in the firmware of most WiFi routers, which allows router admins to create virtual LANs (VLANs). VLANs behave as different networks, even though they effectively run on the same router. While isolating IoT devices on their own network is the best course of action for both home users and companies alike, this wasn't the FBI's only advice on dealing with IoT devices.

Usability Testing and Testing APIs with Hallway Testing

Hallway testing can be described as using "random" persons or groups of people to test software products and interfaces. "Randomness" of a person depends on what we are trying to test. Marchewka suggested trying to engage people who will be using the product (i.e. members of the target group) to get the best understanding of how they will do that. For their hallway testing sessions they invite a truly random group of people if they are checking a mobile app, and a random group of API users if they are verifying the UX of an API. Using the specific background and experience of all the people taking part in a particular session of hallway testing, we can uncover all inefficiencies of the user interface in a tested product, said Marchewka. The app or software does not need to have a GUI to benefit from hallway testing; it can be used as part of API prototyping activity, as Marchewka explained. Consumers of an API can be asked to use an early version during a hallway testing session; for example, creators can find out if methods are named correctly.

Quote for the day:

"Courage is leaning into the doubts and fears to do what you know is right even when it doesn't feel natural or safe." -- Lee Ellis

Daily Tech Digest - December 05, 2019

The Rise, Disappearance And Retirement Of Google Co-Founders

It’s a fitting end for two of the most mysterious tech leaders of a generation, who are both exiting their company as it hovers near $1 trillion in market cap. But it’s also a troubling time for Google. The search giant has faced increasing scrutiny from employees, media organizations, activists, regulators, and lawmakers since Page and Brin first stepped back in the summer of 2015. And many of those controversies are problems of Page and Brin’s creation, either because the duo didn’t foresee the ways in which Google could do harm or because they explicitly steered the company in a direction that flouted standard corporate ethics. In that context, it’s important to look back at the big moments in both men’s careers and how the actions they took have had an outsized impact not just on the tech industry, but on the internet and society itself. What Page and Brin have built will likely last for decades to come, and knowing how Google got to where it is today will be an important piece in the puzzle of figuring out where it goes in the future. ... Although Google is now one of the most powerful forces in online advertising on the planet, Page and Brin weren’t too keen on turning their prototype search engine into an ad-selling machine, at first.

Planning for an intelligent future

Compared to current planning activities, which invariably work on pre-defined cycles such as weekly or monthly processes, intelligent planning can be considered to have more of an ‘always-on’ approach. ... As such, any business that has access to data that exceeds the volume that humans can analyse and understand, will need intelligent planning to remain competitive. For example, a large retail organisation can harvest data from millions of daily transactions to make better buying, customer engagement, and operational decisions. But they don’t need to stop at short-term future actions; instead they should consider using social media sentiment and detailed demographics to make longer term, strategic decisions around areas such as range, store locations and customer experience. Financial services is another prime candidate for intelligent planning, particularly where understanding and influencing consumer behaviour is involved, for anything from calculating the probability of a customer renewing their insurance policy; the likelihood of a loan holder defaulting on their payments; or the future spending profile of credit card customers.

How AI is Transforming the Banking Sector

Banks are always under intense pressure from regulatory bodies to enforce the most recent regulations. These regulations are there to protect the banks and customers from fraudulent activities while at the same time reducing financial crimes like money laundering, tax evasion, and terrorism financing. AI in banking also helps ensure that banks are compliant with the most recent regulations. AI relies on cognitive fraud analytics that watch customer behaviors, track transactions, recognize dubious activities and assess the data of different compliance systems. Businesses can remain up to date with compliance rules and regulations through the use of AI. AI systems can read compliance requirements and detect any changes in the requirements through deep learning and natural language processing. Through this, banks can remain on top of ever-evolving regulatory requirements and align their own regulations with them. Through technologies like analytics, deep learning, and machine learning, banks can remain compliant with regulations.

Augmented reality in retail: Virtual try before you buy

“Nike Fit is a transformative solution and an industry first—using a digital technology to solve for massive customer friction,” Nike writes in its press release for the launch of the app. “In the short term, Nike Fit will improve the way Nike designs, manufactures, and sells shoes—product better tailored to match consumer needs. A more accurate fit can contribute to everything from less shipping and fewer returns to better performance.” ... “The fashion industry has not traditionally been geared toward helping people understand how clothes will actually fit,” the company writes in its press release. “Gap is committed to winning customer trust by consistently presenting and delivering products that make customers look and feel great, and we are using technology to get there.” ... As the technology evolves and gives users more and more accurate renderings of how digital objects look in physical spaces, I expect that more and more brands and industries will hop onto the AR marketing bandwagon. From fashion and accessories to footwear and home décor, and beyond, AR has the potential to transform and completely reimagine customer experiences.

Why the sheer scale of DevOps testing now needs machine learning

The potential value of machine learning is particularly evident in mobile and web app testing because these are very fragmented and complex platforms to handle and understand. What ML can do in this context is to keep all those platforms visible, connected, and in a ready-state mode. In a test lab, ML helps to surface when something is outdated, disconnected from WiFi, or another problem – and moreover, help understand why that has happened. Another way in which ML helps is through showing trends and patterns, helping to not only visualise all that data but provide further insight and make sense of what has happened over the past weeks or months. For instance, it can identify the most problematic functional area in an application, such as the top 5 failing tests over the past 2-3 testing cycles, or which mobile/web platforms have been most error-prone over the past cycles. Was a failure caused by the lab, was it a pop-up, or a security alert? This really matters. Teams invest time, resources and money in automating test activities, but where all this really has an impact and adds value is at the reporting stage.
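
The "top failing tests over recent cycles" report rests on a simple aggregation that ML-assisted tooling then layers insight on top of. A minimal sketch with invented test names and results:

```python
# Surface the most failure-prone tests across pooled results from the
# last few testing cycles.
from collections import Counter

# (test name, passed?) results from recent cycles - made-up data
results = [
    ("login_test", False), ("login_test", False), ("login_test", True),
    ("checkout_test", False), ("search_test", True),
]

failures = Counter(name for name, passed in results if not passed)
print(failures.most_common(2))  # most problematic tests first
```

Classifying *why* each failure happened (lab issue, pop-up, security alert) is where the learned models come in; the ranking itself is just counting.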

The 10 most important cyberattacks of the decade

Yahoo deserves the first mention because of the sheer size of its breach and the damaging effect it had on the company's ability to compete as an email and search engine platform. In 2013, all three billion of Yahoo's accounts were compromised, making the breach the largest in the history of the internet. It took the company three years to notify the public that everyone's names, email addresses, passwords, birth dates, phone numbers and security answers had been sold on the Dark Web by hackers.  Security experts say the Yahoo breach is notable because of how it was mishandled by the company and the devastating effect it had on Verizon's $4.8 billion acquisition. Yahoo initially discovered that a breach occurred in 2015 exposing 500 million accounts. ... The size of the Equifax breach pales in comparison to the value of the data exposed to hackers. As one of America's largest credit bureaus, the company had the most sensitive data on hundreds of millions of people. Hackers gained access to the information of 143 million Equifax customers, including their names, birth dates, drivers' license numbers, Social Security numbers and addresses. More than 200,000 credit card numbers were released and 182,000 documents with personally identifying information were accessed by cybercriminals.

To stop a tech apocalypse we need ethics and the arts

Matt Reaney, the chief executive and founder of Big Cloud – a recruitment company that specialises in data science, machine learning and AI employment – has argued that technology needs more people with humanities training. [The humanities] give context to the world we operate in day to day. Critical thinking skills, deeper understanding of the world around us, philosophy, ethics, communication, and creativity offer different approaches to problems posed by technology. Reaney proposes a “more blended approach” to higher education, offering degrees that combine the arts and STEM. Another advocate of the interdisciplinary approach is Joseph Aoun, President of Northeastern University in Boston. He has argued that in the age of AI, higher education should be focusing on what he calls “humanics”, equipping graduates with three key literacies: technological literacy, data literacy and human literacy. The time has come to answer the call for humanities graduates capable of crossing over into the world of technology so that our human future can be as bright as possible.

Learning algorithms and the self-supervised machine with Dr. Philip Bachman

Supervised learning is sort of what’s had the most immediate success and what’s driving a lot of the deep learning power technologies that are being used for doing things like speech recognition in phones or doing automated question answering for chat bots and stuff like that. So supervised learning refers to kind of a subset of the techniques that people apply when they have access to a large amount of data and they have a specific type of action that they want a model to perform when it processes that data. And what they do is, they get a person to go and label all the data and say, okay, well this is the input to the model at this point in time. And given this input, this is what the model should output. So you’re putting a lot of constraints on what the model is doing and constructing those constraints manually by having a person looking at a set of a million images and, for each image, they say, oh, this is a cat, this is a dog, this is a person, this is a car.
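
The setup described in the interview, labelled examples constraining what the model outputs, can be shown with a toy classifier. A nearest-centroid rule stands in for a real deep model here, and the two-dimensional "features" are invented for illustration:

```python
# Toy supervised learning: human-assigned labels constrain the model,
# which then maps new inputs to one of the known labels.

def centroid(points):
    return tuple(sum(c) / len(points) for c in zip(*points))

# labelled training data: feature vectors with labels a person assigned
train = {"cat": [(1.0, 1.0), (1.2, 0.8)], "car": [(9.0, 9.0), (8.8, 9.2)]}
centroids = {label: centroid(pts) for label, pts in train.items()}

def predict(x):
    # pick the label whose centroid is closest (squared distance)
    return min(centroids,
               key=lambda lbl: sum((a - b) ** 2 for a, b in zip(centroids[lbl], x)))

assert predict((1.1, 0.9)) == "cat"
assert predict((9.1, 8.9)) == "car"
```

The expensive part in practice is exactly what the quote highlights: a person labelling the million images that produce `train` in the first place.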

What a cloud-native approach to RPA could mean to your business

Enterprises' affinity for cloud computing hasn't traditionally been reflected by the RPA industry. That is, until now – with the world's first cloud-native RPA platform, we're bringing the advantages of cloud-native, intelligent RPA deployments to organisations worldwide. For business users, cloud-native RPA operates as a self-service technology accessed via a web-based graphical interface from anywhere. With a single click or drag-and-drop motion, users can automate those parts of any job that don't require human creativity, problem-solving capabilities, empathy, or judgment. Just as with popular Software-as-a-Service (SaaS) apps, users can create what they need using an intuitive web interface within the browser. For many common bots, no coding is required. There are no large client downloads to install and manage or commands to memorise; automation and processes are exposed via drag-and-drop functionality and flow charts. Also, because there is no software client, IT doesn't have to get involved. Infrastructure management costs go away, significantly reducing the total cost of ownership (TCO).

At an even more fundamental level, anyone looking at the state of enterprise security today understands that whatever we’re doing now isn’t working. “The perimeter-based model of security categorically has failed,” says Forrester principal analyst Chase Cunningham. “And not from a lack of effort or a lack of investment, but just because it’s built on a house of cards. If one thing fails, everything becomes a victim. Everyone I talk to believes that.” Cunningham has taken on the zero-trust mantle at Forrester, where analyst Jon Kindervag, now at Palo Alto Networks, developed a zero-trust security framework in 2009. The idea is simple: trust no one. Verify everyone. Enforce strict access-control and identity-management policies that restrict employee access to the resources they need to do their job and nothing more. Garrett Bekker, principal analyst at the 451 Group, says zero trust is not a product or a technology; it’s a different way of thinking about security. “People are still wrapping their heads around what it means. Customers are confused and vendors are inconsistent on what zero trust means. But I believe it has the potential to radically alter the way security is done.”
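
The least-privilege policy at the heart of zero trust, explicit grants, deny by default, can be sketched in a few lines. Roles and resource names below are made-up examples:

```python
# Zero-trust-style least privilege: every access is checked against an
# explicit per-role allow list; anything not granted is denied.
ROLE_GRANTS = {
    "engineer": {"source-repo", "ci-logs"},
    "finance": {"ledger", "payroll"},
}

def is_allowed(role: str, resource: str) -> bool:
    # unknown roles get an empty grant set, i.e. denied by default
    return resource in ROLE_GRANTS.get(role, set())

assert is_allowed("engineer", "ci-logs")
assert not is_allowed("engineer", "payroll")   # needed-to-do-the-job only
```

Real deployments attach identity verification and continuous re-evaluation to every check, but the contrast with perimeter security is already visible: nothing inside the network is trusted implicitly.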

Quote for the day:

"Leadership is a journey, not a destination. It is a marathon, not a sprint. It is a process, not an outcome." -- John Donahoe

Daily Tech Digest - December 04, 2019

10 bad programming habits we secretly love

For the last decade or so, the functional paradigm has been ascending. The acolytes for building your program out of nested function calls love to cite studies showing how the code is safer and more bug-free than the older style of variables and loops, all strung together in whatever way makes the programmer happy. The devotees speak with the zeal of true believers, chastising non-functional approaches in code reviews and pull requests. They may even be right about the advantages. But sometimes you just need to get out a roll of duct tape. Wonderfully engineered and gracefully planned code takes time, not just to imagine but also to construct and later to navigate. All of those layers add complexity, and complexity is expensive. Developers of beautiful functional code need to plan ahead and ensure that all data is passed along proper pathways. Sometimes it’s just easier to reach out and change a variable. Maybe put in a comment to explain it. Even adding a long, groveling apology to future generations in the comment is faster than re-architecting the entire system to do it the right way.

Advancements in explainable AI will continue in 2020 and beyond as new standards are developed around the technical definition of explainability, slowly followed by new technologies to address the explainability problem for business leaders and other non-technical audiences. In real estate, for example, offering a compelling explanation for why a mortgage application was rejected by an AI-driven platform will eventually be a necessity as AI adoption continues. Although we’ll see evolving technical tools and standards, progress on layperson tools will be slower, with some narrow and domain-specific solutions (e.g., non-technical explainability for finance) emerging first. Like the general public’s understanding of ‘the web’ in the 90s, awareness, understanding and trust in AI will gradually increase as the capabilities and use of the technology spread. Using sophisticated tooling to automate what we would call human creativity is now commonly referred to as AI. However, the term has become almost meaningless, as “AI” now covers everything from predictive analytics to Amazon Echo speakers. The industry needs to get its arms around real AI.

Volkswagen Is Accelerating One Of The World’s Biggest Smart-Factory Projects

The biggest challenge, says Jean-Pierre Petit, Capgemini’s director of digital manufacturing, in an emailed comment to Forbes, is to “cross the chasm” from an initial pilot in a single plant to full-scale deployments, which is where the real benefits of digitization kick in. In particular, smart-factory projects require IT teams to work closely with “operational technology” (OT) groups managing machinery and other tech inside factories. Often, OT teams have become used to working quite independently and may resist IT’s efforts to drive change. By working closely together on VW’s industrial cloud project, Hofmann and Walker are sending a strong signal to their respective teams about the need for tight collaboration. The decision to launch pilots at several factories this year rather than just one was also deliberate. “You can put a ton of slides up [about the industrial cloud], but nobody is interested in that,” says Dirk Didascalou, one of the senior AWS executives involved in the project. “They need to see it working first.”

The question that helps businesses overcome unconscious bias

In the workplace, when you’re considering someone for a project or a promotion, turn that mantra into a question: What do I know about this person? You may have a feeling that this person is someone you do or don’t like or connect with, or a sense that this person “is ready for” and “deserves” the opportunity. Guided by that sense, you can easily pick and choose facts from their experience and work records to reinforce your decision. But when you start only with facts, a different picture can emerge. So drill down exclusively on what’s concrete. What projects did this person take part in or help lead, and how successful were they? What do the 360-degree assessments of this person show? What demonstrable impact did this person’s work have on sales, revenues, morale? Sometimes, the facts will back up a general sense that you have, or a description that someone else gave you. 

It's almost a cliché to point out how so much of software today is built on or with open source. But Ian Massingham recently reminded me that for all the attention we lavish on back-end technologies--Linux, Docker containers, Kubernetes, etc.--front-end open source technologies actually claim more developer attention.  Much of the front-end magic open source software that developers love today was born at early web giants like Google and Facebook. Frameworks for the front end make it possible for Facebook, Google, LinkedIn, Pinterest, Airbnb, and others to iterate quickly, scale, deliver consistently fast responsiveness and, in general, mostly delight their users. Indeed, their entire businesses depend on great user experiences. While venture investors historically have plowed their funds into back-end startups creating open source software, the same is not nearly as true with the front end. Accel, Benchmark, Greylock, and other top-tier VCs made fortunes on backing enterprise open source software startups like Heroku, MuleSoft, Red Hat, and many more.

Migrating to GraphQL at Airbnb

Two GraphQL features Airbnb relied upon during this early stage were aliasing and adapters. Aliasing allowed mapping between the camel-case properties returned from GraphQL and the snake-case properties of the old REST endpoints. Adapters were used to convert a GraphQL response so that it could be recursively diffed against a REST response, ensuring GraphQL returned the same data as before. These adapters would later be removed, but they were critical for meeting the parity goals of the first stage. Stage two focuses on propagating types throughout the code, which increases confidence during later stages; at this point, no runtime behavior should be affected. The third stage improves the use of Apollo. Earlier stages used the Apollo Client directly, which fired Redux actions, and components read from the Redux store. Refactoring the app using React Hooks allows use of the Apollo cache instead of the Redux store. A major benefit of GraphQL is reducing over-fetching.
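The adapter idea described above can be sketched in a few lines. This is a hypothetical illustration, not Airbnb's actual code: a function that recursively converts the camel-case keys of a GraphQL-shaped payload to snake-case, so the result can be diffed field-by-field against the legacy REST response.

```typescript
// Hypothetical sketch of the "adapter" technique: recursively rewrite
// camelCase keys to snake_case so a GraphQL response can be compared
// against the equivalent REST response during a parity check.
type Json = string | number | boolean | null | Json[] | { [key: string]: Json };

function camelToSnake(key: string): string {
  // "hostProfile" -> "host_profile"
  return key.replace(/([A-Z])/g, (m) => `_${m.toLowerCase()}`);
}

function adaptForDiff(value: Json): Json {
  if (Array.isArray(value)) {
    return value.map(adaptForDiff);
  }
  if (value !== null && typeof value === "object") {
    const out: { [key: string]: Json } = {};
    for (const [k, v] of Object.entries(value)) {
      out[camelToSnake(k)] = adaptForDiff(v);
    }
    return out;
  }
  return value; // primitives pass through unchanged
}

// Example: a GraphQL-shaped payload adapted to the REST shape.
const graphqlResponse: Json = { listingId: 42, hostProfile: { firstName: "Ada" } };
const restShaped = adaptForDiff(graphqlResponse);
// restShaped: { listing_id: 42, host_profile: { first_name: "Ada" } }
```

Once the migration reaches parity and the REST comparison is no longer needed, an adapter like this can simply be deleted, which matches the article's note that the adapters were temporary.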

ASP.NET Core Microservices: Getting Started

Let's say we're exploring a microservices architecture, and we want to take advantage of polyglot persistence by using a NoSQL database (Couchbase) for a particular use case. For our project, we're going to look at the database-per-service pattern, and use Docker (docker-compose) to manage the database for the ASP.NET Core microservices proof of concept. This blog post will be using Couchbase Server, but you can apply the basics here to the other databases in your microservices architecture as well. I'm using ASP.NET Core because it's a cross-platform, open-source framework. Additionally, Visual Studio (while not required) will give us a few helpful tools for working with Docker and docker-compose. But again, you can apply the basics here to any web framework or programming language of your choice. I'll be using Visual Studio for this blog post, but you can achieve the same effect (with perhaps a little more work) in Visual Studio Code or the plain old command line.
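To make the database-per-service idea concrete, here is a minimal docker-compose sketch in which one service exclusively owns its Couchbase container. The service name, volume name, and image tag are illustrative assumptions, not taken from the original post.

```yaml
# Hypothetical docker-compose.yml sketch: one Couchbase instance owned
# by a single microservice (database-per-service pattern).
version: "3.4"
services:
  catalog-db:
    image: couchbase:community        # illustrative tag; pin a version in practice
    ports:
      - "8091:8091"                   # Couchbase web console / REST API
      - "11210:11210"                 # key-value (data) port
    volumes:
      - catalog-data:/opt/couchbase/var
volumes:
  catalog-data:
```

Each additional microservice that needs its own database would get its own service entry and volume, keeping the data stores isolated from one another.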

Amazon Just Joined The Race To Dominate Quantum Computing In The Cloud

AWS is something of a latecomer to the quantum cloud. IBM kicked off the trend several years ago, and since then a wave of other companies have unveiled cloud-based offerings, including Amazon’s partners D-Wave and Rigetti. Nor is AWS the first cloud provider to offer access to a range of other companies’ quantum hardware: Microsoft took that honor when it launched its Azure Quantum cloud offering last month. Yet AWS is likely to become a force to be reckoned with in the field because of a unique advantage it has over its rivals. ... AWS became a cloud powerhouse because many of the services it now offers were initially developed for Amazon’s vast commercial empire. The same scenario could well play out with quantum computing. For instance, one of the things quantum machines are particularly good at is optimizing delivery routes. AWS could—quite literally—road test a quantum-powered service that lets Amazon plot the most efficient directions for its delivery vehicles to take as they drop off parcels. The machines could also help Amazon optimize the way goods flow through its vast warehouse network.

Simplifying data management in the cloud

Attempting to leverage the approaches and tools we use today will add complexity until the systems eventually collapse under the weight of it. Just think of the number of tools in your data center today that cause you to ask, "what were they thinking?" Indeed, they were thinking much the same way we're thinking today, including looking for tactical solutions that will eventually stop providing the value they once did, and in some cases provide negative value. I've come a long way to make a pitch to you, but as I think about how we solve this issue, one approach keeps popping up as the most likely solution. Indeed, it's been kicked around in different academic circles. It's the notion of self-identifying data. I'll likely hit this topic again at some point, but here's the idea: take the autonomous data concept a few steps further by embedding more intelligence with the data and more knowledge about the data itself. We would gain the ability to have all knowledge about the use of the data available from the data itself, no matter where it's stored or where the information is requested.
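One way to picture self-identifying data is an envelope in which the payload travels together with machine-readable knowledge about itself. The shape below is a hypothetical sketch of the concept, with invented field names, not a standard or the author's design.

```typescript
// Hypothetical sketch of a "self-identifying data" envelope: the payload
// carries its own description, so any system that receives it can answer
// questions about the data without consulting an external catalog.
interface SelfIdentifyingData<T> {
  payload: T;
  metadata: {
    schemaId: string;      // what the payload is
    source: string;        // where it came from
    createdAt: string;     // when it was produced (ISO 8601)
    allowedUses: string[]; // usage policy traveling with the data
  };
}

// Any consumer can interrogate the data directly, wherever it is stored.
function describe<T>(data: SelfIdentifyingData<T>): string {
  const m = data.metadata;
  return `${m.schemaId} from ${m.source}, created ${m.createdAt}, ` +
         `usable for: ${m.allowedUses.join(", ")}`;
}

const record: SelfIdentifyingData<{ orderId: number }> = {
  payload: { orderId: 1001 },
  metadata: {
    schemaId: "order/v1",
    source: "orders-service",
    createdAt: "2019-12-10T00:00:00Z",
    allowedUses: ["analytics", "billing"],
  },
};
```

The point of the sketch is only that the knowledge lives with the data itself rather than in a separate management tool, which is the property the article argues for.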

Survey: IT pros see career potential in as-a-Service trend

IT pros over 55 are most concerned with data complexity slowing down future data migrations. One question in the survey suggests that instead of tearing down data silos, cloud migration projects may create new ones: seventy-seven percent of respondents say that data is siloed between public and private clouds. Miller said that to avoid this, organizations need to choose the aaS model that makes the most business and policy sense. "Companies need to adopt a model that is not tied to one cloud or one premise but has the flexibility to move data and applications to where business needs are best met," he said. "If you adopt the right aaS model, you're breaking down the silos and driving overall efficiencies." While the majority of companies state that they have implemented at least some aaS projects, 66% of respondents say that IT pros avoid this new way of working out of fear of losing their jobs. The youngest respondents (ages 22 to 34) were most likely to think this at 70%, compared to 67% of 35- to 54-year-olds and only 45% of those 55 and older.

Quote for the day:

"Leadership development is a lifetime journey, not a quick trip." -- John Maxwell