Showing posts with label NoOps. Show all posts

Daily Tech Digest - June 21, 2022

Effective Software Testing – A Developer’s Guide

When decisions depend on multiple conditions (i.e., complex if-statements), it is possible to get decent bug detection without having to test all possible combinations of conditions. Modified condition/decision coverage (MC/DC) exercises each condition so that it, independently of all the other conditions, affects the outcome of the entire decision. In other words, every possible condition of each parameter must influence the outcome at least once. The author does a good job of showing how this is done with an example. So, given that you can check the code coverage, you must decide how rigorous you want to be when covering decision points, and create test cases accordingly. The concept of boundary points is useful here. For a loop, it is reasonable to at least test when it executes zero, one, and many times. It can seem like it should be enough to just do structural testing, and not bother with specification-based testing, since structural testing makes sure all the code is covered. However, this is not true. Analyzing the requirements can lead to more test cases than simply checking coverage. For example, if results are added to a list, a test case adding one element will cover all the code, yet the specification also calls for testing the empty and many-element cases.
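The zero/one/many loop guidance can be sketched as concrete test cases. This is a minimal illustration around a hypothetical `collect_evens` function (not from the book), showing how full structural coverage is reached with a single input while specification-based analysis still demands more cases:

```python
def collect_evens(xs):
    """Return the even numbers from xs, preserving order."""
    result = []
    for x in xs:            # the loop under test
        if x % 2 == 0:
            result.append(x)
    return result

# One element already executes every line and branch...
assert collect_evens([2]) == [2]

# ...yet the specification suggests boundary cases a coverage
# report would never demand:
assert collect_evens([]) == []                  # loop runs zero times
assert collect_evens([1]) == []                 # one iteration, no match
assert collect_evens([1, 2, 3, 4]) == [2, 4]    # many iterations, mixed
```

The coverage tool is satisfied after the first assertion; the remaining three exist only because the requirements, not the code, suggest them.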


Inconsistent thoughts on database consistency

While linearizability is about a single piece of data, serializability is about multiple pieces of data. More specifically, serializability is about how to treat concurrent transactions on the same underlying pieces of data. The “safest” way to handle this is to line up transactions in the order they arrived and execute them serially, making sure that one finishes before the next one starts. In reality, this is quite slow, so we often relax this by executing multiple transactions concurrently. However, there are different levels of safety around this concurrent execution, as we’ll discuss below. Consistency models are super interesting, and the Jepsen breakdown is enlightening. If I had to quibble, it’s that I still don’t quite understand the interplay between the two poles of consistency models. Can I choose a lower level of linearizability along with the highest level of serializability? Or does the existence of any level lower than linearizable mean that I’m out of the serializability game altogether? If you understand this, hit me up! Or better yet, write up a better explanation than I ever could :). If you do, let me know so I can link it here.
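The “safest” strategy described above, executing transactions strictly in arrival order, one at a time, can be sketched in a few lines. The account names and transfer logic here are illustrative assumptions, not from any real database:

```python
from queue import Queue
import threading

balances = {"a": 100, "b": 0}
arrivals = Queue()  # transactions queued in arrival order

def transfer(src, dst, amount):
    """Build a transaction closure; nothing runs until the worker calls it."""
    def txn():
        if balances[src] >= amount:
            balances[src] -= amount
            balances[dst] += amount
    return txn

# Clients enqueue in arrival order...
for _ in range(10):
    arrivals.put(transfer("a", "b", 10))

# ...and a single worker drains the queue serially, so each transaction
# finishes before the next one starts: trivially serializable, but with
# no concurrency at all -- which is exactly why real systems relax it.
def worker():
    while not arrivals.empty():
        arrivals.get()()  # run one transaction to completion

t = threading.Thread(target=worker)
t.start()
t.join()
assert balances == {"a": 0, "b": 100}
```

The single worker thread is the whole trick: with only one executor there is nothing to interleave, so anomalies like lost updates cannot occur, at the cost of throughput.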


AI and How It’s Helping Banks to Lower Costs

Using AI helps banks lower the costs of predicting future trends. Instead of hiring financial analysts to analyze data, AI is used to organize and present data that the banks can use. They can get real-time data to analyze behaviors, predict future trends, and understand outcomes. With this, banks can get more data that, in turn, helps them make better predictions. ... Another advantage of using AI in the banking industry is that it reduces human errors. By reducing errors, banks prevent the loss of revenue those errors cause. Moreover, human errors can lead to financial data breaches. When this happens, critical data may get exposed to criminals, who can use the stolen data to commit fraud under clients’ identities. Especially with a high volume of work, employees cannot avoid making errors. With the help of AI, banks can reduce a variety of errors. ... AI helps banks save money by detecting fraudulent payments. Without AI, banks may lose millions because of criminal activities. But thanks to AI, banks can prevent such losses as the technology can analyze more than one channel of data to detect fraud.


Is NoOps the End of DevOps?

NoOps is not a one-size-fits-all solution. It is limited to apps that fit into existing serverless and PaaS solutions. Since some enterprises still run on monolithic legacy apps (requiring total rewrites or massive updates to work in a PaaS environment), you’d still need someone to take care of operations even if there’s a single legacy system left behind. In this sense, NoOps is still some way from handling long-running apps that run specialized processes, or production environments with demanding applications. With DevOps, by contrast, operations work happens before code goes to production: releases include monitoring, testing, bug fixes, security, and policy checks on every commit. You must have everyone on the team (including key stakeholders) involved from the beginning to enable fast feedback and ensure automated controls and tasks are effective and correct. Continuous learning and improvement (a pillar of DevOps teams) shouldn’t only happen when things go wrong; members must work together collaboratively to problem-solve and improve systems and processes.


How IT Can Deliver on the Promise of Cloud

While many newcomers to the cloud assume that hyperscalers will handle most of the security, the truth is they don’t. Public cloud providers such as AWS, Google, and Microsoft Azure publish shared responsibility models that push security of the data, platform, applications, operating system, network and firewall configuration, and server-side encryption to the customer. That’s a lot you need to oversee, with high levels of risk and exposure should things go wrong. Have you set up ransomware protection? Monitored your network environment for ongoing threats? Arranged for security between your workloads and your client environment? Secured sets of connections for remote client access or remote desktop environments? Maintained audit control of open source applications running in your cloud-native or containerized workloads? These are just some of the security challenges IT faces. Security of the cloud itself – the infrastructure and storage – falls to the service providers. But your IT staff must handle just about everything else.


Distributed Caching on Cloud

Caching is a technique to store the state of data outside of the main storage, keeping it in high-speed memory to improve performance. In a microservices environment, all apps are deployed with multiple instances across various servers/containers on the hybrid cloud. A single caching source is needed in a multicluster Kubernetes environment on the cloud to persist data centrally and replicate it on its own caching cluster. It serves as a single point of storage to cache data in a distributed environment. ... Distributed caching is now a de facto requirement for distributed microservices apps in a distributed deployment environment on hybrid cloud. It addresses concerns in important use cases like maintaining user sessions when cookies are disabled on the web browser, improving API query read performance, avoiding operational cost and database hits for the same type of requests, managing secret tokens for authentication and authorization, etc. A distributed cache syncs data on hybrid clouds automatically without any manual operation and always gives the latest data.
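The “avoiding database hits for the same type of requests” use case is commonly implemented as a read-through cache with a TTL. This is a minimal sketch in which an in-process dict stands in for the shared cache cluster (e.g. Redis or Hazelcast); the names `db`, `cache`, and `get_user` are illustrative assumptions:

```python
import time

db = {"u1": {"name": "Ada"}}   # slow system of record
cache = {}                     # stands in for the distributed cache cluster

TTL_SECONDS = 300              # entries expire so stale data ages out

def get_user(user_id):
    entry = cache.get(user_id)
    if entry and entry["expires"] > time.time():
        return entry["value"]                  # cache hit: no DB round trip
    value = db.get(user_id)                    # cache miss: hit the database
    cache[user_id] = {"value": value, "expires": time.time() + TTL_SECONDS}
    return value

assert get_user("u1") == {"name": "Ada"}   # first call: miss, populates cache
assert "u1" in cache
assert get_user("u1") == {"name": "Ada"}   # second call: served from cache
```

In a real deployment the dict would be replaced by a networked cache client shared by every service instance, which is what makes the cache a single point of storage across the cluster.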


Bridging The Gap Between Open Source Database & Database Business

It is relatively easy to get a group of people together to create a new database management system or data store. We know this because over the past five decades of computing, the rate of proliferation of tools that provide structure to data has increased, and seemingly at an increasing rate, thanks in no small part to innovation by the hyperscalers and cloud builders, as well as academics who just plain like mucking around in the guts of a database to prove a point. But it is another thing entirely to take an open source database or data store project and turn it into a business that can provide enterprise-grade fit and finish and support a much wider variety of use cases and customer types and sizes. This is hard work, and it takes a lot of people, focus, money, and luck. This is the task that Dipti Borkar, Steven Mih, and David Simmen took on when they launched Ahana two years ago to commercialize the PrestoDB variant of the Presto distributed SQL engine created by Facebook, and, not coincidentally, it is a similar task that the original creators of Presto have taken on with the PrestoSQL variant, now called Trino, which is commercialized by their company, Starburst.


Data gravity: What is it and how to manage it

Examples of data gravity include applications and datasets moving to be closer to a central data store, which could be on-premises or colocated. This makes the best use of existing bandwidth and reduces latency. But it also begins to limit flexibility, and can make it harder to scale to deal with new datasets or adopt new applications. Data gravity occurs in the cloud, too. As cloud data stores increase in size, analytics and other applications move towards them. This takes advantage of the cloud’s ability to scale quickly, and minimises performance problems. But it perpetuates the data gravity issue. Cloud storage egress fees are often high, and the more data an organisation stores, the more expensive it is to move it, to the point where it can be uneconomical to move between platforms. McCrory refers to this as “artificial” data gravity, caused by cloud services’ financial models rather than by technology. Forrester points out that new sources and applications, including machine learning/artificial intelligence (AI), edge devices or the internet of things (IoT), risk creating their own data gravity, especially if organisations fail to plan for data growth.
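The “artificial” gravity of egress fees is just linear arithmetic: the cost of leaving grows in direct proportion to the data you hold. A back-of-the-envelope sketch, using an assumed illustrative rate of $0.09/GB (not a quote from any provider):

```python
EGRESS_PER_GB = 0.09  # assumed rate, USD per GB, for illustration only

def migration_egress_cost(stored_gb):
    """Cost to move stored_gb out of a cloud platform at a flat egress rate."""
    return stored_gb * EGRESS_PER_GB

# The fee scales linearly with the size of the store, so the bigger the
# store grows, the harder it is to justify moving platforms.
for tb in (10, 100, 1000):
    gb = tb * 1024
    print(f"{tb:>5} TB stored -> ~${migration_egress_cost(gb):,.0f} to move out")
```

At this assumed rate a 1 PB store costs tens of thousands of dollars just to exit, before any re-ingest or migration engineering, which is the financial pull McCrory describes.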


CIOs Must Streamline IT to Focus on Agility

“Streamlining IT for agility is critical to business, and there’s not only external pressure to do so, but also internal pressure,” says Stanley Huang, co-founder and CTO at Moxo. “This is because streamlining IT plays a strategic role in the overall business operations, from C-level executives to every employee’s daily efforts.” He says that streamlining business processes is the best and most efficient way to reflect business status and drive each department’s planning. From an external standpoint, there is pressure to streamline IT because it also impacts the customer experience. “A connected and fully aligned cross-team interface is essential to serve the customer and make a consistent end user experience,” he adds. For business opportunities pertaining to task allocation and tracking, streamlining IT can help align internal departments into one overall business picture and enable employees to perform their jobs at a higher level. “When the IT system owns the source of data for business opportunities and every team’s involvement, cross-team alignment can be streamlined and made without back-and-forth communications,” Huang says.


Open Source Software Security Begins to Mature

Despite the importance of identifying vulnerabilities in dependencies, most security-mature companies — those with OSS security policies — rely on industry vulnerability advisories (60%), automated monitoring of packages for bugs (60%), and notifications from package maintainers (49%), according to the survey. Automated monitoring represents the most significant gap between security-mature firms and those firms without a policy, with only 38% of companies that do not have a policy using some sort of automated monitoring, compared with the 60% of mature firms. Companies should add an OSS security policy if they don't have one, as a way to harden their development security, says Snyk's Jarvis. Even a lightweight policy is a good start, he says. "There is a correlation between having a policy and the sentiment of stating that development is somewhat secure," he says. "We think having a policy in place is a reasonable starting point for security maturity, as it indicates the organization is aware of the potential issues and has started that journey."



Quote for the day:

"No great manager or leader ever fell from heaven; it's learned, not inherited." -- Tom Northup

Daily Tech Digest - December 13, 2018

AI and investing: The artificial intelligence analytical revolution

In the next five years, investment management will go through an analytical revolution: AI and investing will come together and revolutionise the way that investment information is analysed, packaged and presented to investors. This will change the face of investment management, enabling professional investors to make informed investment decisions faster, and for the first time giving private investors access to the same advanced stock selection and portfolio construction tools as the professionals. At the heart of this revolution is Augmented Intelligence: harnessing the power of AI combined with human decision making. As Paul Tudor Jones famously said, “No human is better than a machine, but no machine is better than a human with a machine”. ... By bringing out “interesting” insights, whether confirming or enhancing a suspected salient point or identifying one that might otherwise have been overlooked, AI is the humble ‘idiot-savant’ that can usefully take on the tedious, data-intensive work that humans are not best suited for.



A radical new neural network design could overcome big challenges in AI

The layer approach has served the AI field well—but it also has a drawback. If you want to model anything that transforms continuously over time, you also have to chunk it up into discrete steps. In practice, if we returned to the health example, that would mean grouping your medical records into finite periods like years or months. You could see how this would be inexact. If you went to the doctor on January 11 and again on November 16, the data from both visits would be grouped together under the same year. So the best way to model reality as close as possible is to add more layers to increase the granularity. (Why not break your records up into days or even hours? You could have gone to the doctor twice in one day!) Taken to the extreme, this means the best neural network for this job would have an infinite number of layers to model infinitesimal step-changes. The question is whether this idea is even practical.
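The granularity argument above can be made concrete. A residual-style network applies a fixed number of discrete update steps, and with step size h = T/n each layer performs x ← x + h·f(x), i.e. one Euler step of a continuous dynamic; more layers means finer steps. The dynamic `f` below is an arbitrary illustrative choice, not taken from the paper:

```python
import math

def f(x):
    return -0.5 * x            # the continuous "dynamic" a layer approximates

def residual_net(x, n_layers, total_time=1.0):
    """Apply n_layers discrete Euler steps over a fixed time horizon."""
    h = total_time / n_layers   # coarser h = fewer, chunkier layers
    for _ in range(n_layers):
        x = x + h * f(x)        # one layer = one discrete step
    return x

# More layers -> finer steps -> closer to the continuous solution
# x(T) = x0 * exp(-0.5 * T):
exact = 1.0 * math.exp(-0.5)
coarse = residual_net(1.0, 4)       # 4 "layers"
fine = residual_net(1.0, 1000)      # 1000 "layers"
assert abs(fine - exact) < abs(coarse - exact)
```

Taking n to infinity is exactly the limit the article describes: the discrete layer stack turns into a continuous-time model, which is what an ODE solver computes directly without materializing the layers.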


DevOps adoption is creating chaos in the enterprise

With DevOps a nearly universal concept in the modern enterprise, it stands to reason that there are going to be issues. If so, the numbers in OverOps' report indicate there's more than just a margin of implementation error at work: Something is going wrong in a lot of DevOps organizations. Take automation, for example: DevOps is designed for faster release schedules, which means that automated tools are used to catch an increasing percentage of software errors. Despite increased use of automation, 76.6% of respondents said they're still using manual processes, and a shocking 52.2% rely on customers to tell them about errors. All that manual troubleshooting takes time, with 20% of respondents saying they spend one full workday a week fixing bugs, and another 42% spend between one half and one full day. Think back to the shared responsibility that developers and operations feel under DevOps, and you can start to see where OverOps' report is going: The big problem in DevOps is confusion.


Computers could soon run cold, no heat generated

The new “exotic, ultrathin material” is a topological transistor. That means the material has unique tunable properties, the group, which includes scientists from Monash University in Australia, explains. It’s superconductor-like, they say, but unlike superconductors, doesn’t need to be chilled. Superconductivity, found in some materials, is a state in which electrical resistance is eliminated, typically through extreme cooling. “Packing more transistors into smaller devices is pushing toward the physical limits. Ultra-low energy topological electronics are a potential answer to the increasing challenge of energy wasted in modern computing,” the Berkeley Lab article says. ... Another group of researchers, from the University of Konstanz in Germany, say supercomputers could be built without waste heat. That group is working on the transportation of electrons without heat production and is approaching it through a form of superconductivity.


Managing risk in machine learning


As we deploy ML in many real-world contexts, optimizing statistical or business metrics alone will not suffice. ... Given the growing interest in data privacy among users and regulators, there is a lot of interest in tools that will enable you to build ML models while protecting data privacy. These tools rely on building blocks, and we are beginning to see working systems that combine many of these building blocks. ... Because there’s no ironclad procedure, you will need a team of humans in the loop. Notions of fairness are not only domain and context sensitive, but as researchers from UC Berkeley recently pointed out, there is a temporal dimension as well (“We advocate for a view toward long-term outcomes in the discussion of ‘fair’ machine learning”). What is needed are data scientists who can interrogate the data and understand the underlying distributions, working alongside domain experts who can evaluate models holistically.


When a NoOps implementation is -- and when it isn't -- the right choice


"Basically, NoOps is the same thing as no pilots or no doctors," Davis said. "We need to have pathways to use the systems and software that we create. Those systems and software are created by humans -- who are invaluable -- but they will make mistakes. We need people to be responsible for gauging what's happening." Human fallibility has driven the move to scripting and automation in IT organizations for decades. Companies should strive to have as little human error as possible, but also recognize that humans are still vital for success. Comprehensive integration of AI into IT operations tools is still several years away, and even then, AI will rely on human interaction to operate with the precision expected. Davis likens the situation to the ongoing drive for autonomous cars: They only work if you eliminate all the other drivers on the road.


Microsoft is telling awesome open source stories

Open source isn't just about code. Or needn't be. The spirit of open source is collaboration and sharing, which Microsoft has recently kicked up a notch with a new series of blogs that show how company culture can change, and what it could mean for open source development. Microsoft is already the world's biggest contributor to open source, at least as measured by the number of employees contributing to open source projects. It doesn't need to tell tales, and yet that's exactly what the company is doing, to cool effect, with its new Microsoft Open Source Stories blog. The blog aims to share the behind-the-scenes stories about how certain projects went open source. As Microsoft's Dmitry Lyalin related to Microsoft watcher Paul Thurrott, "We hope to tell over 20 stories through this process as we have had a lot of great stuff hidden behind the firewall."


Social engineering at the heart of critical infrastructure attack


Analysis reveals that the malware moves in several steps. The initial attack vector is a document that contains a weaponised macro to download the next stage, which runs in memory and gathers intelligence. The victim’s data is sent to a control server for monitoring by the actors, who then determine the next steps.  The researchers said it was still unclear whether the attacks they observed were a first-stage reconnaissance operation with more to come. “We will continue to monitor this campaign and will report further when we or others in the security industry receive more information,” they said. Raj Samani, chief scientist and fellow at McAfee, said Operation Sharpshooter was yet another example of a sophisticated, targeted attack being used to gain intelligence for malicious actors. 


Merging Internet Of Things And Blockchain In Preparation For The Future


Companies, users of IoT and blockchain, as well as prominent figures in these futuristic technologies, are all starting to come around to the idea that the Fourth Industrial Revolution will be built not on just one of them, but on an amalgamation of all of them in different facets. If IoT has issues with security and corruption, it makes sense that blockchain comes to its aid and helps secure the data with its immutable ledger. At the same time, if AI struggles with keeping a record of its data and decisions, a distributed ledger can help with that too. So, as AI and IoT, for example, gain an edge over their previous issues through the integration of blockchain, the blockchain becomes more ingrained and useful going forward, making it indispensable in some sectors. Adoption like this has always been the hope for distributed ledger technology. It is probably time for blockchain builders and implementers to stop worrying about disrupting current and past sectors with the use of a single blockchain, and instead look at how they can use blockchain in alliance with IoT, Big Data, AI, and others to build that Fourth Industrial Revolution.


Top 10 Tech Predictions for 2019

Some predictions are easy. For example, it’s a good bet that popular buzzwords like digital transformation, cloud computing, artificial intelligence and quantum computing will continue to get a lot of attention in the news. What is less clear is exactly how these areas of technology might evolve. Which innovations will become an integral part of doing business, and which will fade in importance? How will enterprises attempt to leverage these technologies for competitive advantage? And what should IT leaders be doing now to prepare for the near future? ... The analyst predictions, on the other hand, could be useful to CIOs and other IT leaders who are writing goals, setting budgets and deciding on training priorities for the coming year. In many cases, the analysts have offered direct advice to enterprise IT on how to capitalize on these trends. Often the various research firms agree with each other about which steps enterprises should take. But in other cases, cybersecurity being one, the analysts had wildly divergent ideas on how trends are likely to impact enterprises and what leaders should do to prepare.



Quote for the day:


"A leader should demonstrate his thoughts and opinions through his actions, not through his words." -- Jack Weatherford