IT pursues zero-touch automation for application support
Automation is a top goal, from application conception -- or selection, in the case of a third-party business application -- through adoption and use. Executive-level management wants zero-touch automation that controls every application, all the IT resources it runs on and every step of every development and operations process. Zero-touch automation, sometimes called ZTA, covers two specific goals: sustain an infrastructure that supports applications, databases and workers, and accurately automate the mapping of applications onto IT infrastructure. The former is about analytics and capacity planning; the latter underpins practices such as DevOps and orchestration. DevOps, both as a set of technologies and as the cultural changes that drive faster, better software delivery and operations, predates advances in cloud computing and virtualization. Before it, development teams would build something and turn it over to operations to run, without consideration for operational deployment requirements.
Nutanix powers Manchester City Council’s IT
The council assessed Nutanix, HPE SimpliVity, HPE Synergy and the VxRail appliance from Dell EMC and VMware. Farrington says it selected Nutanix running on a Supermicro appliance because “Nutanix offered the closest to a silver bullet – we could get everything from a single vendor”. In Farrington’s experience, HCI gives the council greater flexibility than traditional IT infrastructure. One benefit is a distributed storage fabric with thin provisioning, which enables the council to make the most of its storage capacity. “We have the ability to scale quickly. The ability to add another storage and compute device quickly is beneficial,” he says. “We also benefit from the deduplication and compression services that are built in.” HCI has also provided a way to bring together the support teams for Windows servers and storage. “I had six teams to look after the datacentre facility,” says Farrington. “Historically, we had two teams – one looked after our 900 Windows servers, the other looked after storage and backup. ...”
Top 10 Features to Look for in Automated Machine Learning
Feature engineering is the process of altering the data to help machine learning algorithms work better; it is often time-consuming and expensive. While some feature engineering requires domain knowledge of the data and business rules, most of it is generic. Look for an automated machine learning platform that can automatically engineer new features from existing numeric, categorical, and text features. You will want a system that knows which algorithms benefit from extra feature engineering and which don’t, and that only generates features that make sense given the data characteristics. ... It’s quite standard for machine learning software to train the algorithm on your data. After all, you wouldn’t want to do Newton-Raphson iteration by hand, would you? Probably not. But often there’s still hyperparameter tuning to worry about. Then you want to do feature selection, to improve both the speed and accuracy of a model. Look for an automated machine learning platform that uses smart hyperparameter tuning, not just brute force, and that knows the most important hyperparameters to tune for each algorithm.
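The brute-force-versus-smart distinction above can be made concrete with a toy comparison. This is a minimal sketch of the idea only, with a made-up scoring function standing in for model training; it is not any AutoML vendor's implementation:

```python
import random

# Toy stand-in for "train a model and measure its score".
# The score peaks at learning_rate = 0.1, depth = 6.
def score(learning_rate, depth):
    return 1.0 - abs(learning_rate - 0.1) - 0.02 * abs(depth - 6)

# Brute force: a grid search evaluates every combination.
grid_lr = [0.001, 0.01, 0.1, 1.0]
grid_depth = [2, 4, 6, 8, 10]
grid_trials = [(lr, d) for lr in grid_lr for d in grid_depth]  # 20 evaluations
best_grid = max(grid_trials, key=lambda t: score(*t))

# "Smarter" search: sample fewer points from continuous ranges,
# spending the budget on the hyperparameter that matters most here
# (learning rate, sampled on a log scale).
random.seed(0)
random_trials = [(10 ** random.uniform(-3, 0), random.randint(2, 10))
                 for _ in range(8)]  # only 8 evaluations
best_random = max(random_trials, key=lambda t: score(*t))

print("grid best:", best_grid)
print("random best:", best_random)
```

Real platforms go further still (Bayesian optimization, early stopping, per-algorithm priors), but the budget argument is the same: fewer, better-placed evaluations instead of an exhaustive sweep.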
Machine Learning Widens the Gap Between Knowledge and Understanding
Given how imperfect our knowledge has always been, this assumption has rested upon a deeper one. Our unstated contract with the universe has been that if we work hard enough and think clearly enough, the universe will yield its secrets, for the universe is knowable, and thus at least somewhat pliable to our will. But now that our new tools, especially machine learning and the internet, are bringing home to us the immensity of the data and information around us, we’re beginning to accept that the true complexity of the world far outstrips the laws and models we devise to explain it. Our newly capacious machines can get closer to capturing it than we can, yet they, as machines, don’t really understand anything at all. This, in turn, challenges another assumption we hold one level further down: the universe is knowable to us because we humans (we’ve assumed) are uniquely able to understand how the universe works. At least since the ancient Hebrews, we have thought ourselves to be the creatures uniquely made by God with the capacity to receive His revelation of the truth.
How Azure uses machine learning to predict VM failures
On average, disk errors start showing up between 15 and 16 days before a drive fails, and in the last 7 days before it fails reallocated sectors triple and device resets go up tenfold. Behaviour and failure patterns vary from one drive manufacturer to another, and even between different models of hard drive from the same vendor. The telemetry for training the machine learning system has to be collected from different kinds of workloads, because that affects how quickly the failure is going to happen: if the VM is thrashing the disk, a drive with early signs of failure will fail fairly quickly, whereas the same drive in a server with a less disk-intensive workload could carry on working for weeks or months. Azure has a similar machine-learning system that predicts failures of compute nodes. In both cases, instead of trying to definitively predict whether a specific piece of hardware is failing, the systems rank them in order of how error-prone they are. The top systems on the list stop accepting new VMs and have running VMs live-migrated off onto different nodes, and then get taken out of service for testing.
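The ranking approach described above can be sketched in a few lines. The telemetry fields and weights below are illustrative stand-ins for a learned model, not Azure's actual features or scores; the point is the shape of the system: score, rank, drain the top of the list:

```python
# Hypothetical per-node telemetry; the two signals are the ones the
# article says spike in the final week (reallocated sectors, resets).
nodes = [
    {"id": "node-a", "reallocated_sectors": 4,   "device_resets": 1},
    {"id": "node-b", "reallocated_sectors": 120, "device_resets": 40},
    {"id": "node-c", "reallocated_sectors": 30,  "device_resets": 12},
]

def risk(node):
    # Stand-in for the trained model: a weighted error-proneness score.
    # Real systems learn these weights per drive model and workload.
    return 0.7 * node["reallocated_sectors"] + 0.3 * node["device_resets"]

# Rank nodes by error-proneness rather than predicting pass/fail.
ranked = sorted(nodes, key=risk, reverse=True)

# The top of the list stops accepting new VMs, has running VMs
# live-migrated away, and is pulled for testing.
to_drain = [n["id"] for n in ranked[:1]]
print("drain:", to_drain)
```

Ranking sidesteps the hard problem of a definitive per-drive prediction: the operator only needs the ordering to be roughly right to drain the riskiest hardware first.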
SQL Server users could already run the database themselves on Google Cloud Platform (GCP) via VMs, but Google will fully manage the upcoming service through its Cloud SQL offering, which already features PostgreSQL and MySQL. Google's managed SQL Server service will support all editions of SQL Server 2017, which also has backward compatibility with older versions of the database, said Dominic Preuss, director of product management for Google Cloud, at the Cloud Next conference here this week. AWS has offered a similar service through its Relational Database Service for years. Moreover, Microsoft has worked since 2009 on its Azure SQL managed service. Microsoft's effort has endured some fits and starts over the years. Customers that wanted to move very large SQL Server databases to the cloud had to run them on Azure's VM-based service or break them apart into multiple pieces, given Azure SQL's size limitations.
How to deal with backup when you switch to hyperconverged infrastructure
Each HCI vendor offers a hardware configuration using components supported by the virtualization vendors it wishes to support. Since the system comes pre-built, you can be assured that all the hardware components will work together and with any supported hypervisor. Any incompatibilities between the various components will be handled by the HCI vendor. Some HCI vendors also offer their own hypervisors. The best example of this is Nutanix with its Acropolis hypervisor. Typically, such a hypervisor offers tighter integration with the HCI hardware and integrated data-protection features. Often, the built-in hypervisor is also less expensive than traditional hypervisors, especially if you take advantage of the native data-protection features. The final type of HCI vendor supports neither VMware nor Hyper-V, nor does it build its own hypervisor. Scale Computing uses the KVM hypervisor, which is open source. Like Nutanix, it does this to reduce its customers’ TCO while offering much of the same functionality that VMware offers, and it also offers integrated data protection.
How AIOps Supports a DevOps World
AIOps can also automate workflows for alerts that require escalation, human attention or investigation. For example, alerts on devices supporting business-critical IT services may require notification of Level 1 support staff within five minutes of alert receipt. If the alert is from a server and concerns a specific application, an IT or DevOps user will need to create an incident and route it to the relevant application team. AIOps takes care of this immediately with alert escalation workflows that help program first-response actions for notification and incident creation. Again, this can occur completely unsupervised – no human interaction required – once these policies are established. What’s more, policy-driven AIOps correlates dependencies based on downstream resources or establishes an algorithm-based correlation to address groups of alerts continuously. This drastically frees up time that is typically spent sifting through alert floods, figuring out what to do with them, and then doing it. Advanced AIOps tools use native instrumentation to determine how frequently specific alert sequences occur.
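The escalation policy described in that example reduces to a small set of rules. The sketch below uses hypothetical alert fields and action names to show the shape of such a policy engine; it does not reflect any particular AIOps product's API:

```python
# Hedged sketch of a policy-driven escalation workflow.
# Field names ("business_critical", "source", "application") and the
# action strings are illustrative assumptions, not a real product schema.
def handle_alert(alert):
    actions = []
    if alert.get("business_critical"):
        # Policy: business-critical devices notify Level 1 support
        # within the five-minute SLA.
        actions.append("notify_l1_within_5min")
    if alert.get("source") == "server" and alert.get("application"):
        # Policy: server alerts tied to an application open an incident
        # routed to the owning application team.
        actions.append(f"create_incident:{alert['application']}-team")
    return actions

alert = {"business_critical": True, "source": "server",
         "application": "payments"}
print(handle_alert(alert))
```

Once policies like these are codified, every matching alert is handled the same way, unsupervised, which is exactly the "no human interaction required" behavior the excerpt describes.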
Doing continuous testing? Here's why you should use containers
As nearly every software tester has experienced, test environments are a mixed blessing. On one hand, they allow end-to-end tests that would otherwise have to be executed in production. Without a test environment, testing teams would be shipping code that hasn't been tested across functional boundaries out to users—and hoping for the best. A well-configured and maintained test environment, one that closely mimics production and contains up-to-date code deployments, can provide a safe and sane way for testers to validate a scenario before it gets into the hands of a customer. Problematically, however, test environments encourage a mode of development that is fast becoming outdated: long integration cycles, an untrustworthy main source trunk, and late-stage testing. The most productive, highest-performing engineering teams do just the opposite. They need to be able to trust that code in the main trunk could go to production at any time. They often shift left on quality, with the majority of testing happening before a code change even lands.
Kotlin Multiplatform for iOS Developers
KMP works by using Kotlin to program business logic that is common to your app's various platforms. Then, each platform's natively programmed UI calls into that common logic. UI logic must still be programmed natively in many cases because it is too platform-specific to share. In iOS this means importing a .framework file - built from your KMP code - into your Xcode project, just like any other external library. You still need Swift to use KMP on iOS, so KMP is not the end of Swift. KMP can also be introduced iteratively, so you can implement it with no disruption to your current project. It doesn't need to replace existing Swift code. Next time you implement a feature across your app's various platforms, use KMP to write the business logic, deploy it to each platform, and program the UIs natively. For iOS, that means business logic in Kotlin and UI logic in Swift. The close similarity between Swift's and Kotlin's syntax greatly reduces the learning curve involved in writing that KMP business logic.
Quote for the day:
"To double your net worth, double your self-worth. Because you will never exceed the height of your self-image." -- Robin Sharma