"Machine learning can be the IT pro’s best friend; they just need to realize how it can be used to make their jobs easier." This makes sense because the use of machine learning will be a “crawl-walk-run” for most organizations, as they will apply it in phases. The first phase is descriptive: the system analyzes the data and helps interpret it. The next phase is more cognitive, where the AI can start to solve problems. The third phase will see the technology start to predict things. For example, it could perhaps predict that a security breach is going to occur based on other data. The last phase, and we are years away from this, is prescriptive, where the AI is able to predict things and then take action to remediate the issue. In the previous example, it could not only predict a breach, but then take the necessary steps to ensure it doesn’t happen. For this to occur, the AI would operate iteratively, feeding its own outputs back in as inputs.
Most of the value of deep learning today is in narrow domains where you can get a lot of data. Here’s one example of something it cannot do: have a meaningful conversation. There are demos, and if you cherry-pick the conversation, it looks like it’s having a meaningful conversation, but if you actually try it yourself, it quickly goes off the rails. In fact, anything that is too open-domain is beyond what we can currently do. In the meantime, we can instead use these systems to assist human workers, who then review and correct the suggested responses. That’s much more feasible. When they interact with others, people tend to express the same intent with different words, potentially over several sentences with different word orders. Talking to chatbots can therefore be challenging: current chatbot solutions don’t handle this diversity well, so you have to phrase your dialogue carefully in order to be understood. This is frustrating.
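To see why the same intent phrased differently trips up naive chatbots, here is a minimal sketch: exact string matching fails on rephrasings, while even a crude token-overlap score tolerates some variation. The intent names and example utterances are hypothetical, and real systems use learned embeddings rather than this toy heuristic.

```python
# Toy intent matcher: maps varied phrasings to the same intent
# via token overlap. Intents and phrasings are invented examples.

def tokens(text):
    """Lowercase, strip basic punctuation, split into a set of words."""
    return set(text.lower().replace("?", "").replace(".", "").split())

# A few canonical utterances per intent (toy training data).
INTENTS = {
    "check_balance": ["what is my balance", "show my account balance"],
    "reset_password": ["i forgot my password", "reset my password please"],
}

def classify(utterance):
    """Return the intent whose examples share the most tokens with the input."""
    words = tokens(utterance)
    best_intent, best_score = None, 0
    for intent, examples in INTENTS.items():
        score = max(len(words & tokens(ex)) for ex in examples)
        if score > best_score:
            best_intent, best_score = intent, score
    return best_intent

print(classify("could you show me the balance on my account"))  # check_balance
print(classify("password reset needed"))                        # reset_password
```

Neither input matches a canonical utterance exactly, which is precisely the diversity problem the paragraph describes; overlap scoring recovers the intent anyway, but only within a narrow, predefined domain.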
Any company, whether startup or enterprise, that wants to take advantage of AI needs to ensure it has actually useful data to start with. While some companies might get by with simple log data generated by their application or website, a company that wants to use AI to enhance its business, products, or services should ensure it is collecting the right type of data. Depending on the industry and business you are in, the right type of data might be log data or transactional data, numerical or categorical; it is up to the person working with the data to decide what it needs to be. Besides collecting the right data, another big step is ensuring that the data you work with is correct, meaning that it is an accurate representation of what actually happened. If I want a count of all payment transactions, I need to know the definition of a transaction: is it an initiated transaction or a processed transaction? Only once I have answered that question, and ensured the organization agrees on the answer, can I work with the data.
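The definition question above is easy to make concrete: the same dataset yields different "transaction counts" depending on which definition the organization agrees on. The records and status values below are hypothetical illustrations, not a real schema.

```python
# Sketch: one dataset, two answers to "how many payment transactions?"
# Field names and statuses are invented for illustration.
transactions = [
    {"id": 1, "status": "initiated"},
    {"id": 2, "status": "processed"},
    {"id": 3, "status": "initiated"},
    {"id": 4, "status": "processed"},
    {"id": 5, "status": "failed"},
]

# Definition A: every transaction that was started, regardless of outcome.
initiated = len(transactions)

# Definition B: only transactions that actually completed processing.
processed = sum(1 for t in transactions if t["status"] == "processed")

print(initiated)  # 5
print(processed)  # 2
```

A report built on definition A and a dashboard built on definition B will disagree by more than double here, which is exactly why the definition has to be settled before the data is used.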
Think of how the sharing economy has exploded in the past decade. If you’ve taken an Uber to the airport or rented an Airbnb, you’ve been a part of it. We’re even at a point where renting out personal items is a viable business model. For example, Omni Storage stores items you’re not using, just like a normal storage company, but it also rents your items out to other people. Skis, a guitar, a winter jacket: it’s all available for rent (with the owner’s permission) via an app. We all hold onto certain possessions because we plan to use them eventually. Or so we tell ourselves. Why not make some money off our stuff instead of letting it go unused? That question is at the heart of the sharing economy, and we’re going to be hearing a lot more about businesses like Omni in the next few years. Now imagine what this could look like with blockchain involved. Futuristic sharing concepts will only work if many other considerations are taken care of: each item has to be documented, proven authentic, assigned a current value, and even insured. Blockchain can be extremely useful here.
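The documentation requirement is where a blockchain-style structure helps: each item record is chained to the previous one by its hash, so retroactively altering an item's stated value or insurance status breaks the chain. This is a minimal sketch of hash linking only, with invented item records; a real system would add signatures, consensus, and a distributed ledger.

```python
import hashlib
import json

def block_hash(block):
    """Hash a block's contents using a deterministic JSON serialization."""
    return hashlib.sha256(json.dumps(block, sort_keys=True).encode()).hexdigest()

def add_record(chain, record):
    """Append an item record, linking it to the previous block's hash."""
    prev = block_hash(chain[-1]) if chain else "0" * 64
    chain.append({"prev_hash": prev, "record": record})
    return chain

chain = []
add_record(chain, {"item": "guitar", "owner": "alice", "value_usd": 400, "insured": True})
add_record(chain, {"item": "skis", "owner": "bob", "value_usd": 250, "insured": False})

# Each block commits to the one before it; edit block 0 and the link breaks.
assert chain[1]["prev_hash"] == block_hash(chain[0])
```

Because block 1 embeds the hash of block 0, silently changing the guitar's recorded value would invalidate every later block, which is the tamper-evidence property that makes a shared item registry trustworthy.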
In our experience, AI can be a huge help to the leader who’s trying to become more inwardly agile and foster creative approaches to transformation. When a CEO puts AI to work on the toughest and most complex strategic challenges, he or she must rely on the same set of practices that build personal inner agility. By sending AI out into the mass of complexity without knowing in advance what it will come back with, the CEO embraces the discovery of original, unexpected, and breakthrough ideas. This is a way to test, and finally move on from, long-held beliefs and prejudices about the organization, and to radically reframe questions in order to find entirely new kinds of solutions. And the best thing about AI solutions is that they can be tested: AI creates its own empirical feedback loop that allows you to think of your company as an experimental science lab for transformation and performance improvement. In other words, the hard science of AI can be just what you need to ask the kind of broad questions that lay the foundation for meaningful progress.
In its blog post, Kaspersky Lab states: "It seems that there's a bot that is searching for vulnerable Cisco switches via the IoT search engine Shodan and exploiting the vulnerability in them (or, perhaps, it might be using Cisco's own utility that is designed to search for vulnerable switches). Once it finds a vulnerable switch, it exploits the Smart Install Client, rewrites the configuration and thus takes another segment of the Internet down. That results in some data centers being unavailable, and that, in turn, results in some popular sites being down." In an advisory on the Cisco switch vulnerability issued Monday, the Indian Computer Emergency Response Team stated that multiple vulnerabilities have been reported in Cisco IOS XE, which could be exploited by a remote attacker sending a crafted packet to an affected device to gain full control of it or cause a denial-of-service condition.
The data protection officer will also need the right tools in place to monitor irregularities and work with the CISO and network team. Real-time analysis at the network level will give businesses an indication of the files or data that have been transferred or viewed from the network environment. This will support any breach reporting, give an organisation the means to handle the reputational aspect of a breach fallout, and help it rapidly understand what data has been accessed and how to respond. The next key part of the puzzle is for a business to have a slick process for reporting and communicating breaches to the regulator, customers and any other affected parties. Practice is the only way to prepare: define a process, rehearse it in simulations with the required decision makers, refine it, and repeat as the business and regulatory environment shifts, year on year. Complement this with a clear and defined internal procedure so all staff know what to do, and who they need to speak to, should they notice something awry.
Traditional MDM has been around since the early 2000s. As data volume has grown and the potential value of analytics has exploded, enterprises seeking to compete on analytics struggle to scale mastering efforts across the surfeit of available data sources. Clearly, creating robust data engineering pipelines to unify this data at scale is more important, and harder, than ever. An “agile” approach that utilizes machine learning can cut the time required for unification and analytics projects by around 90% while scaling to more sources than traditional approaches. Moreover, given the scale of enterprise data, automation is the key to agility and scale. Such enterprise data automation can only be achieved with some human oversight to ensure the results are both fast and accurate. Machine learning enables not just raw data scalability, but also human process scalability.
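At the core of any mastering effort is a match decision: do two records from different sources describe the same entity? In the agile approach described above, that decision function is learned from human-labeled pairs; in this sketch a simple string-similarity threshold stands in for the learned model. The records, field names, and threshold are all illustrative assumptions.

```python
from difflib import SequenceMatcher

def similarity(a, b):
    """Normalized similarity between two strings, in [0, 1]."""
    return SequenceMatcher(None, a.lower(), b.lower()).ratio()

def is_match(rec_a, rec_b, threshold=0.8):
    """Treat two records as the same entity if name and city are similar enough.
    A production system would learn this decision from labeled pairs instead."""
    score = (similarity(rec_a["name"], rec_b["name"])
             + similarity(rec_a["city"], rec_b["city"])) / 2
    return score >= threshold

a = {"name": "Acme Corporation", "city": "New York"}
b = {"name": "ACME Corp.", "city": "New York"}
c = {"name": "Globex Industries", "city": "Springfield"}

print(is_match(a, b))  # True  -- likely the same company, despite formatting
print(is_match(a, c))  # False -- different entity
```

The human-oversight point from the paragraph maps directly onto this function: experts label borderline pairs near the threshold, and the model is retrained on those labels, which is what lets the process scale to many sources without hand-writing rules for each one.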
The public and, more importantly, governmental leaders are losing patience with companies that fall victim to attacks because they didn’t address known vulnerabilities for which patches were available and exploits highly publicized. Aside from how dangerous the leaked NSA-developed exploits can be in the hands of cybercriminals, attacks like WannaCry showed us how connected we are. The “ransomworm” spread like wildfire through networks and jumped into new areas through third-party connections. Where there was a path, there was a way. This should be of concern, especially amid the move to the cloud, where complexity and visibility challenges only become more daunting. To stay safe in the era of distributed attacks and cloud-first strategies, organisations need to rethink how they view their attack surface. Attackers don’t see your network as having distinct boundaries, and neither can you. Whether it’s your physical, virtual or cloud network, you need to approach security holistically and centralize management.
While technology addiction is a real thing, especially for teenagers, IT pros have their own monkeys on their backs. Whether you're an infrastructure junkie or a Slack head, chasing the data dragon or mesmerized by the blinking lights on your network operations center dashboard, your tech addictions can kill productivity, sap budgets and stall innovation. An inability to relinquish control can lead to technology silos and turf wars. Overdependence on artificial intelligence can actually hurt, not help, your company. And while everyone loves shiny new toys, they may not be the most cost-effective solutions for your organization. The first step on the road to recovery is admitting you have a problem. The next step is reading our prescriptions for how to kick your bad habits and get clean again. "Organizations are caught in analysis paralysis," says Sarah Kampman, vice president of product at Square Root, whose CoEfficient SaaS platform helps retailers and automotive brands make sense of their data. "The information isn't translating into behavioral changes that drive success."
Quote for the day:
"Don’t be so quick to label something as “bad.” It may be the thing that takes you to success." -- Tim Fargo