Machine learning has many potential uses, including external (client-facing) applications like customer service, product recommendation, and pricing forecasts, but it is also being used internally to speed up processes or improve products that were previously manual and time-consuming. You’ll notice both types throughout our list of machine learning use cases below. ... This consumer-facing use of machine learning applies mostly to smartphones and smart home devices. The voice assistants on these devices use machine learning to understand what you say and craft a response. The models behind voice assistants were trained on human languages and variations in the human voice, because they have to translate what they hear into words and then produce an intelligent, on-topic response. ... This machine-based pricing strategy is best known in the travel industry. Flights, hotels, and other travel bookings usually have a dynamic pricing strategy behind them. Consumers know that the sooner they book a trip the better, but they may not realize that the actual price changes are driven by machine learning.
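To make the dynamic-pricing idea concrete, here is a toy sketch in Python. The base fare, coefficients, and demand signal are entirely invented for illustration; real airline pricing models are far more sophisticated, and this is not any carrier's actual algorithm.

```python
# Toy dynamic-pricing sketch: the fare rises as departure nears and seats fill.
# All numbers and weights below are hypothetical, chosen only for illustration.

def dynamic_price(base_fare: float, days_out: int, load_factor: float) -> float:
    """Return an adjusted fare.

    days_out:    days remaining before departure
    load_factor: fraction of seats already sold (0.0 - 1.0)
    """
    urgency = max(0.0, (30 - days_out) / 30)   # 0 when >30 days out, 1 on departure day
    demand = load_factor ** 2                  # demand pressure grows as the plane fills
    multiplier = 1.0 + 0.5 * urgency + 0.8 * demand
    return round(base_fare * multiplier, 2)

print(dynamic_price(200.0, days_out=60, load_factor=0.2))  # early booking, empty plane
print(dynamic_price(200.0, days_out=3, load_factor=0.9))   # last minute, nearly full
```

A production system would learn these weights from historical bookings rather than hard-coding them, but the shape of the logic — inputs about timing and demand mapped to a price multiplier — is the same.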
Agile methodology has been widely adopted by enterprises across the globe. Software development teams have been using Agile for over a decade now because it provides efficient methods and techniques for building software. Agile methodology is centered around the idea of continuous iteration of development and testing in the software development lifecycle (SDLC). It focuses on iterative, incremental, and evolutionary software development. Agile methodology enables cross-functional teams to collaborate to deliver value faster, with greater flexibility, quality, and predictability. ... DevOps is a way of deploying applications to production. It is a deployment model that emphasizes integration, communication, and collaboration between the development and operations teams to enable rapid deployments of software. DevOps focuses on allowing teams to deploy code to the production environment faster, using automated tools and processes. Automation is a critical element of DevOps that enables organizations to deliver applications and services rapidly.
Cloud service providers have the ability to ease the application of machine learning into everyday business use. Amazon, Google, and Microsoft all offer pre-built services that enable machine learning functions. Speech recognition, sentiment analysis, chatbot enhancement, image and video analysis, and classification and regression services are some of the assistive solutions that are currently provided (“Comparing Machine Learning as a Service: Amazon, Microsoft Azure, Google Cloud AI, IBM Watson.”). As the application of machine learning becomes more valuable, the tech giants will continue to invest in building on top of their machine learning as a service (MLaaS) offerings. By utilizing these services from a cloud provider, companies can expect to save the time, money, and resources that would otherwise have been invested in creating their own in-house solutions. By choosing to use MLaaS, companies can be quicker to market and adopt the latest developments in the space, without taking on an extraordinary amount of risk.
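To illustrate the MLaaS pattern, here is a minimal sketch of how a client might assemble a request for a hosted sentiment service. The endpoint shape and payload fields below are hypothetical, not any provider's real API; each vendor's actual request format differs, so consult their documentation.

```python
import json

# Hypothetical MLaaS client sketch: build the JSON body a hosted sentiment
# service might expect. Field names here are invented for illustration only.

def build_sentiment_request(text: str, lang: str = "en") -> dict:
    """Assemble a request body for a (hypothetical) sentiment endpoint."""
    return {
        "document": {"content": text, "language": lang},
        "features": ["sentiment"],
    }

payload = build_sentiment_request("The new release is fantastic.")
print(json.dumps(payload))
# In a real integration, this body would be POSTed to the provider's endpoint
# with an API key, typically via an official SDK or an HTTP client.
```

The appeal of MLaaS is visible even in this sketch: the client sends plain text and receives predictions, with model training, serving, and scaling all handled by the provider.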
Although it’s apparent that there is a shortage of data science talent on the job market, and hiring for this type of role can be challenging, AI and ML success requires much more than the skills of a data scientist. I’m talking about model building, data prep, training, and inference. If you’re serious about scaling and reaping the benefits that AI and ML have to offer, you should be looking to work with ML architects, data engineers, and operations managers. This piece goes into much more detail about how to structure your data science team. The next challenge is to organize and scale your team effectively. Do you have staff trained with the necessary skills in-house to move this project from concept to completion? Do you build these skills through retraining and hiring? Or will you contract a team to help complete this project in a predetermined amount of time? Building up your current team’s skillsets will help you scale on a long-term basis, whereas third-party contractors will help get your project off the ground with speed and efficiency.
The first challenge involves dealing with scale in streaming and batch applications. The sheer proliferation of geospatial data and the SLAs required by applications overwhelm traditional storage and processing systems. Customer data has been spilling out of existing vertically scaled geo databases into data lakes for many years now due to pressures such as data volume, velocity, storage cost, and strict schema-on-write enforcement. While enterprises have invested in geospatial data, few have the proper technology architecture to prepare these large, complex datasets for downstream analytics. Further, given that scaled data is often required for advanced use cases, the majority of AI-driven initiatives are failing to make it from pilot to production. ... Databricks offers a unified data analytics platform for big data analytics and machine learning used by thousands of customers worldwide. It is powered by Apache Spark™, Delta Lake, and MLflow, with a wide ecosystem of third-party integrations and available libraries. Databricks UDAP delivers enterprise-grade security, support, reliability, and performance at scale for production workloads.
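One common technique for making large geospatial datasets tractable in a distributed engine is to bucket points into grid cells and use the cell id as a partitioning or join key, so that spatial joins become cheap key equality checks. A minimal sketch of the idea in plain Python — the cell size and key format are arbitrary choices for illustration, not what any particular platform uses:

```python
# Bucket lat/lon points into fixed-size grid cells. In a distributed engine,
# the cell id can serve as a partition or join key for spatial workloads.
# The 0.1-degree cell size is an arbitrary choice for this sketch.

def grid_cell(lat: float, lon: float, cell_deg: float = 0.1) -> str:
    """Map a coordinate to a coarse grid-cell identifier."""
    row = int(lat // cell_deg)
    col = int(lon // cell_deg)
    return f"{row}:{col}"

points = [
    (37.7749, -122.4194),  # San Francisco, downtown
    (37.7790, -122.4312),  # San Francisco, nearby
    (40.7128, -74.0060),   # New York
]
print([grid_cell(lat, lon) for lat, lon in points])
```

Production systems typically use hierarchical indexes (geohash, S2, or H3 cells) for the same purpose, since those allow varying resolution and neighbor lookups, but the core trick — turning geometry into a hashable key — is the one shown here.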
The suspect had replaced legitimate codes created by merchants with fake ones embedded with a virus programmed to steal the personal information of consumers. Scammers have also been profiting handsomely from the mainland’s multibillion-dollar bike-sharing industry. By replacing the original QR code used to unlock a bicycle with a fake one, they have been able to trick users into transferring money into the scammers’ own bank accounts. The proliferation of this type of crime has been made possible by the explosion of mobile payments in China, as the concept of a cashless society moves ever closer to becoming a reality. Nowhere is this shift more evident than in the abundance of QR codes – a type of barcode, or machine-readable image – that allow consumers to make small payments by simply scanning the image and confirming the transaction. QR codes were invented in 1994 by Denso Wave, a unit of Japan’s largest automotive parts maker, to allow for quick scanning when tracking vehicles during the assembly process. From the car factory, the codes later spread to broader usage, encompassing everything from consumer purchases to social media.
The rise of cryptojacking has followed the same upward trajectory as the value of cryptocurrency. Suddenly, digital “cash” is worth actual money, and hackers, who usually have to take several steps to generate income from stolen data, have a direct path to cashing in on their exploits. But if all the malware does is sit quietly in the background generating cryptocurrency, is it really much of a danger? In short, yes – for two reasons. In fundamental terms, cryptojacking attacks are about stealing: in this case, energy and system resources. The energy might be minimal (more about that in a moment), but using resources slows the performance of the overall system and actually increases wear and tear on the hardware, reducing its lifespan and resulting in frustration, inefficiency, and increased costs. Much more importantly, however, a cryptojacking-compromised system is a flashing warning sign that a vulnerability exists. Often, infiltrating a system to cryptojack involves opening access points that can be easily leveraged to steal other types of data.
For very large datasets consisting of hundreds of thousands of images, such as those needed to train highly accurate deep learning models, it is impractical to manually assign image labels. As such, we developed a separate, text-based deep learning model to extract image labels using the de-identified radiology reports associated with each X-ray. This NLP model was then applied to provide labels for over 560,000 images from the Apollo Hospitals dataset used for training the computer vision models. To reduce noise from any errors introduced by the text-based label extraction and also to provide the relevant labels for a substantial number of the ChestX-ray14 images, approximately 37,000 images across the two datasets were visually reviewed by radiologists. These were separate from the NLP-based labels and helped to ensure high quality labels across such a large, diverse set of training images.
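As a simplified stand-in for the text-based labeling step described above: the real system used a trained deep learning NLP model, but the core task — turning free-text report findings into image labels while handling negation — can be sketched with simple rules. The keywords, label names, and negation pattern below are invented for illustration.

```python
import re

# Toy rule-based label extractor for radiology report text. The pipeline
# described in the article used a trained NLP model; these keyword patterns
# and labels are hypothetical, chosen only to illustrate the labeling task.

LABEL_PATTERNS = {
    "pneumothorax": re.compile(r"\bpneumothorax\b", re.I),
    "opacity": re.compile(r"\bopacit(y|ies)\b", re.I),
    "fracture": re.compile(r"\bfracture\b", re.I),
}

# Very crude negation handling: "no X" / "no evidence of X" marks X negative.
NEGATION = re.compile(r"\bno (?:evidence of )?(\w+)", re.I)

def extract_labels(report: str) -> set:
    """Return the set of positive findings, skipping simply negated mentions."""
    negated = {m.group(1).lower() for m in NEGATION.finditer(report)}
    labels = set()
    for label, pattern in LABEL_PATTERNS.items():
        m = pattern.search(report)
        if m and m.group(0).lower() not in negated:
            labels.add(label)
    return labels

report = "Small right pneumothorax. No fracture. Patchy opacity at the left base."
print(extract_labels(report))
```

A learned model handles what rules cannot — spelling variants, hedged language ("cannot exclude..."), and long-range context — which is why noisy extracted labels were cross-checked against radiologist visual review in the study above.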
Lack of awareness: it’s common to see junior software engineers writing code that is vulnerable. The “injection” concept may not be very intuitive to them, and they deliver vulnerable code because it’s the easiest and fastest way for them to implement a specific component. Rush: we all know how stressful and demanding modern software development environments can be. Concepts like Agile and CI/CD are great for fast delivery, but when developers are focused only on delivering the code, they might forget to check for security issues. Complexity: APIs and modern apps are complex. A modern app, like Uber for example, might look very simple from the UX (user experience) perspective, but on the backend there are many databases and microservices communicating with each other behind the scenes. In many cases, it’s hard to track which inputs come from the client itself and require extra security attention (such as filtering and scanning), and which inputs are internal to the system.
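The injection risk described above is easiest to see in code. Here is a minimal demonstration using Python’s built-in sqlite3 module (the table and data are invented for illustration): string-formatting user input into SQL is the vulnerable pattern, and a parameterized query is the standard fix.

```python
import sqlite3

# Demo: SQL built by string formatting is injectable; parameterized queries
# bind user input as a literal value and are not.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, secret TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', 's3cr3t'), ('bob', 'hunter2')")

user_input = "nobody' OR '1'='1"  # a classic injection payload

# VULNERABLE: the payload rewrites the WHERE clause and dumps every row.
vulnerable = conn.execute(
    f"SELECT name FROM users WHERE name = '{user_input}'"
).fetchall()

# SAFE: the driver binds the input as a single literal, so nothing matches.
safe = conn.execute(
    "SELECT name FROM users WHERE name = ?", (user_input,)
).fetchall()

print(vulnerable)  # [('alice',), ('bob',)]
print(safe)        # []
```

The same principle — never concatenate untrusted input into a query, command, or template — applies across database drivers, ORMs, and shell invocations, which is why it is worth teaching early even to engineers under delivery pressure.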
Quote for the day:
"Increasingly, management's role is not to organize work, but to direct passion and purpose." -- Greg Satell