"For folks that have applications that are linked to the hardware environment, it's very difficult for them to get off it. So we'll work with clients, especially when they go beyond the end-of-service life," O'Grady said. "It's mainly government, discrete manufacturing, and banking." The equipment is stored at a former DEC manufacturing facility in Salem, NH, where three former DEC hands work as technicians. When customers send in hardware for repair, re-homing, or recycling, "they have a little fun seeing if their technician ID is on that machine," O'Grady joked. "Any equipment that's demand-constrained, or supply-constrained in the market, we'll keep it here... I don't think there's anything we haven't been able to find for clients," he said, adding that he sometimes works with museums and related organizations for assistance. "The VAX 6000, these are 30-year-old machines. We've got more than several clients that we're helping out long-term. Not everyone has enough budget dollars to go around to innovate in new technology," so they focus on stabilizing what already works, he explained. "As long as they can keep the hardware environment viable, then it works for them."
IT organizations can't simply inject an AIOps tool into their monitoring and management roster and expect positive results. Instead, they need to prep IT workflows and infrastructure for an AI-driven strategy. "The first place that IT leaders start their AI journey tends to be process automation," said Chirag Dekate, a Gartner analyst. IT automation itself doesn't equal AIOps, but it propels organizations in the right direction, as it eliminates menial and repetitive tasks for IT staff. First ensure existing IT automation scripts function as they should, Dekate said. Streamlined data management and collection is another prerequisite for AI in IT operations, according to Ari Silverman, director of platform automation and enterprise architecture at OCC, an equity derivatives clearing organization in Chicago. Silverman's team uses LogicMonitor as an AIOps monitoring tool, primarily for predictive analytics and automated capacity planning and management.
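The predictive capacity planning that Silverman's team relies on can be reduced to a very small core idea: fit a trend to a resource metric and estimate when it crosses a limit. The sketch below is an illustration of that idea only, with invented data and threshold; a real AIOps tool such as LogicMonitor layers anomaly detection, seasonality, and alerting on top of far richer telemetry.

```python
# Hedged sketch: linear-trend capacity forecasting on invented data.
# One disk-usage sample per day, in GB; 500 GB is a made-up capacity.
usage_gb = [410, 418, 423, 431, 440, 446, 455]
capacity_gb = 500

n = len(usage_gb)
xs = range(n)
mean_x = sum(xs) / n
mean_y = sum(usage_gb) / n

# Least-squares slope: estimated GB of growth per day.
slope = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, usage_gb)) / \
        sum((x - mean_x) ** 2 for x in xs)
intercept = mean_y - slope * mean_x

# Day index at which the fitted line hits capacity, minus days elapsed.
days_until_full = (capacity_gb - intercept) / slope - (n - 1)
print(f"Growing ~{slope:.1f} GB/day; ~{days_until_full:.0f} days of headroom left")
```

Even a toy model like this shows why clean, consistent metric collection is a prerequisite: the forecast is only as good as the samples feeding it.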
Netgear has ditched the towers in its latest iteration of the standard Orbi mesh system. The updated system is rectangular, with waves on top that cleverly hide circulation vents to keep the devices cool. It's a solid system, able to cover up to 6,000 square feet with 1.2Gbps of wireless goodness (if you choose a 4-pack). The only bad thing is that, you guessed it, the app is a bit of a mess. Once you get it set up and running, it's a solid system, but getting there can be an exercise in patience. The base system won't see the satellites, and setup steps take forever to complete. In a market with "instant" networks, it's a major gaffe. It's a good thing, then, that the Orbi system is less expensive than most mesh systems. If you have a large area of space to cover, this is the cheapest way to do it. Where the Orbi WiFi 6 system dominated via networking power, the RBK13 wins by being the cheapest way to get mesh networking into your home. You can get a router and two satellites for under $200.
Over the past few years, enterprise leaders have become captivated by the idea of digital transformation. Perhaps that shouldn't be surprising, given all the hype from analysts and vendors. These days it's tough to find an enterprise technology product that doesn't advertise itself as a key ingredient in digital transformation. And expert analysis is full of promises that sometimes seem too good to be true: "... Using hardware, software, algorithms, and the Internet, it's 10 times cheaper and faster to engage customers, create offerings, harness partners, and operate your business." That kind of promise is certainly enticing. But it's tough to find agreement on what exactly "digital transformation" means. For some organizations, it just means getting into ecommerce. For others, it involves doing away with paper-based processes and becoming more efficient. Still others are embracing cloud computing, DevOps, automation, the Internet of Things (IoT), and artificial intelligence (AI) to become more competitive. And many seem to be doing most of this and more.
One of the latest "big things" in fintech is the growth of the mobile payments industry. Consumers want payments to be instant, invisible, and free (IIF). Mobile payment innovations might even do away with our traditional wallets, as global consumers become less reliant on cash. Google, Apple, Tencent, and Alibaba already have their own payment platforms and continue to roll out new features such as biometric access control, including fingerprint and face recognition. One of the most popular payment methods in China, used by hundreds of millions of people every day, is WeChat Pay. Alibaba's Alipay, a third-party online and mobile payment platform, is now the world's largest mobile payment platform. Many mobile payment platforms are building programs and offers based on the user's purchase history. While many financial institutions are continuing to adopt new technology to enhance operations and improve customer service, these five trends will provide exciting avenues for innovation. Financial institutions realize they must learn how to use fintech to their competitive advantage.
We can distinguish at least three types of technical debt. Even when developers don't compromise on quality and try to build future-proof code, debt can arise involuntarily, provoked by constant changes in the requirements or in the system's development. Your design turns out to be flawed and you can't add new features quickly and easily, but it wasn't your fault or decision. In this case, we're talking about accidental or unavoidable tech debt. The second type is deliberate debt, which appears as the result of a well-considered decision: even when the team understands that there is a right way to write the code and a fast way to write the code, it may go with the second one. Often, this makes sense, as with startups aiming to deliver their products to market quickly enough to outpace their competitors. Finally, the third type of tech debt refers to situations where developers didn't have enough skill or experience to follow best practices, which leads to genuinely bad code. Bad code can also appear when developers don't take enough time and effort to understand the system they are working with, miss things, or, conversely, make too many changes.
Not many cloud predictions matter to the general populace, but this one about the power of AI affects everyone. In 2020, explainable AI will rise in prominence for cloud-based AI services -- particularly as enterprises face pushback around the ethical issues of AI. Explainable AI is a technology that provides justification for the decisions it reaches. Both Google and Microsoft have launched explainable AI initiatives, currently in early stages. Amazon is likely to introduce some explainable AI capabilities as part of its AI tools. Through the power of deep learning, data scientists can build models to predict things and make decisions. But this trend can result in black-box algorithms that are difficult for humans to interpret. The biggest challenge enterprises face is the need to track bias in AI models and identify cases where models lose accuracy.
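One common technique behind explainability tooling is permutation importance: shuffle one input feature at a time and measure how much the model's accuracy drops, which reveals which inputs actually drive a black-box decision. The sketch below is a minimal stdlib illustration of that idea, with an invented "model" and synthetic data; production toolkits apply the same principle to real trained models.

```python
import random

# Toy "black box": predicts 1 when feature x0 > 0.5 and ignores x1.
# (Invented for illustration; we pretend we don't know its internals.)
def model(row):
    return 1 if row[0] > 0.5 else 0

random.seed(0)
data = [(random.random(), random.random()) for _ in range(200)]
labels = [model(row) for row in data]  # model is perfect by construction

def accuracy(rows):
    return sum(model(r) == y for r, y in zip(rows, labels)) / len(rows)

def permutation_importance(feature_idx):
    # Shuffle one feature column across rows and measure the accuracy drop.
    col = [row[feature_idx] for row in data]
    random.shuffle(col)
    perturbed = [
        tuple(col[i] if j == feature_idx else v for j, v in enumerate(row))
        for i, row in enumerate(data)
    ]
    return accuracy(data) - accuracy(perturbed)

imp0 = permutation_importance(0)  # large drop: x0 drives the decision
imp1 = permutation_importance(1)  # zero drop: x1 is irrelevant
print(imp0, imp1)
```

Shuffling `x0` destroys roughly half the predictions, while shuffling `x1` changes nothing, exposing the model's true dependence without opening the black box.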
The issue for me is that the ML groups I've mentioned are perhaps limiting. Consider a dynamic combining of all types, with the approach, type, or algorithm adjusted during the processing of the training data, whether mass loads or transactions. At issue are use cases that don't really fit these three categories. For example, we have some labeled data and unlabeled data, and we're looking for the ML engine to identify both the data itself and patterns in the data. Most of us don't have perfect training data, and it would be nice if the ML engine itself could sort things out for us. With a few exceptions, we have to pick supervised or unsupervised learning and only solve a portion of the problem, and we may not have the training data needed to make it useful. Moreover, we lack the ability to provide reinforcement learning as the data is used within transactional applications, such as identifying a fraudulent transaction as it happens. There are ways to create an "all of the above" approach, but it entails some pretty heavy-duty work for both the training data and the algorithms.
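The "some labeled data, some unlabeled data" case the author describes is the classic semi-supervised setting, and one simple way to mix the categories is self-training: fit on the labeled portion, pseudo-label the unlabeled point the model is most confident about, and refit. Below is a hedged sketch of that loop using a 1-D nearest-centroid classifier; the data, classifier, and confidence margin are all invented for illustration.

```python
# Invented 1-D dataset: four labeled points and five unlabeled ones.
labeled = [(1.0, "a"), (1.2, "a"), (4.8, "b"), (5.1, "b")]
unlabeled = [1.1, 0.9, 5.0, 4.9, 3.6]

def centroids(pairs):
    # Mean of each class's points: a minimal "model".
    sums, counts = {}, {}
    for x, y in pairs:
        sums[y] = sums.get(y, 0.0) + x
        counts[y] = counts.get(y, 0) + 1
    return {y: sums[y] / counts[y] for y in sums}

def predict(model, x):
    # Nearest centroid; confidence = margin between the two distances.
    ranked = sorted(model, key=lambda y: abs(x - model[y]))
    margin = abs(x - model[ranked[1]]) - abs(x - model[ranked[0]])
    return ranked[0], margin

train = list(labeled)
pool = list(unlabeled)
while pool:
    model = centroids(train)
    # Pseudo-label only the single most confident unlabeled point,
    # so ambiguous points (like 3.6) are decided last.
    best = max(pool, key=lambda x: predict(model, x)[1])
    train.append((best, predict(model, best)[0]))
    pool.remove(best)

final = centroids(train)
print(final)
```

Real semi-supervised pipelines use the same structure with stronger models and confidence thresholds, but the core loop, predict, absorb the confident points, and refit, is exactly this.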
AI and machine learning have powered these innovations, and many of the AI advancements came about thanks to open source projects such as TensorFlow and PyTorch, which launched in 2015 and 2016, respectively. In the next decade, Ferris stressed the importance of not just making AI smarter and more accessible, but also more trustworthy. This will ensure that AI systems make decisions in a fair manner, aren't vulnerable to tampering, and can be explained, he said. Open source is the key to building this trust into AI. Projects like the Adversarial Robustness 360 Toolkit, AI Fairness 360 Open Source Toolkit, and AI Explainability 360 Open Source Toolkit were created to ensure that trust is built into these systems from the beginning, he said. Expect to see these projects and others from the Linux Foundation AI, such as the ONNX project, drive significant innovation related to trusted AI in the future. ONNX provides a vendor-neutral interchange format for deep learning and machine learning models.
What's interesting is how the HIPAA Security Rule also governs the physical aspect of ePHI and healthcare information systems. Not many information security standards go as deep as HIPAA when it comes to maintaining the physical security of information. The physical facility used to store ePHI needs to have sufficient security measures, and only authorized personnel may access the hardware and terminals connected to the healthcare information systems. Unauthorized access is considered a serious violation of the HIPAA standard. Logging is also part of the physical safeguard: access to terminals and servers must be logged in detail to deter unauthorized access and allow for an easy audit of the secure facility. Logging on a physical level helps the entire system remain safe. There is also the need for secure devices and terminals, including the secure tablets now used by medical personnel. It is up to the healthcare service providers to maintain a secure network across their facilities. To complete the equation, policies for hardware disposal and the decommissioning of healthcare information systems must also be put in place.
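The audit the logging safeguard enables can be as simple as scanning the physical-access log for badge IDs that are not on the authorized list. The sketch below illustrates that kind of check only; the field names, badge IDs, and authorized list are invented, and it is in no way a certified HIPAA control.

```python
# Invented authorized-badge roster for illustration.
authorized = {"B-1001", "B-1002"}

# Invented physical-access log entries (badge swipe per secured door).
access_log = [
    {"badge": "B-1001", "door": "server-room", "time": "2020-01-06T09:14"},
    {"badge": "B-2042", "door": "server-room", "time": "2020-01-06T22:03"},
    {"badge": "B-1002", "door": "terminal-bay", "time": "2020-01-07T08:51"},
]

# Flag every log entry whose badge is not on the authorized list.
violations = [e for e in access_log if e["badge"] not in authorized]
for e in violations:
    print(f"UNAUTHORIZED: badge {e['badge']} at {e['door']} ({e['time']})")
```

The point is less the code than the prerequisite it exposes: an audit like this is only possible if every physical access is logged in detail in the first place.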
Quote for the day:
"Leaders think and talk about the solutions. Followers think and talk about the problems." -- Brian Tracy