
Daily Tech Digest - September 18, 2021

10 Steps to Simplify Your DevSecOps

Automation is key when balancing security integrations with speed and scale. DevOps adoption already focuses on automation, and the same holds true for DevSecOps. Automating security tools and processes ensures teams are following DevSecOps best practices. Automation ensures that tools and processes are used in a consistent, repeatable and reliable manner. It’s important to identify which security activities and processes can be completely automated and which require some manual intervention. For example, running a SAST tool in a pipeline can be automated entirely; however, threat modeling and penetration testing require manual effort. The same is true for processes. A successful automation strategy also depends on the tools and technology used. One important automation consideration is whether a tool has enough interfaces to allow its integration with other subsystems. For example, to enable developers to do IDE scans, look for a SAST tool that has support for common IDE software. Similarly, to integrate a tool into a pipeline, review whether the tool offers APIs, webhooks or CLI interfaces that can be used to trigger scans and request reports.
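To make the automation point concrete, below is a minimal sketch of a pipeline gate that triggers a SAST scan and fails the build on high-severity findings. The `sast-cli` command, its flags, and the JSON report layout are hypothetical placeholders, not any particular vendor's interface; substitute the CLI, API, or webhook your tool actually provides.

```python
# Hypothetical CI step: run a SAST scan and block the pipeline on high-severity findings.
# "sast-cli", its flags, and the report schema are placeholders for your tool's interface.
import json
import subprocess
import sys

def run_sast_scan(source_dir: str, report_path: str = "sast-report.json") -> int:
    """Invoke the scanner CLI, then count high-severity findings from its JSON report."""
    subprocess.run(
        ["sast-cli", "scan", source_dir, "--output", report_path, "--format", "json"],
        check=True,
    )
    with open(report_path) as fh:
        report = json.load(fh)
    return sum(1 for f in report.get("findings", []) if f.get("severity") == "HIGH")

if __name__ == "__main__":
    high_findings = run_sast_scan("src/")
    if high_findings:
        print(f"SAST gate failed: {high_findings} high-severity findings")
        sys.exit(1)  # non-zero exit fails the CI job
    print("SAST gate passed")
```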


Next-Generation Layer-One Blockchain Protocols Remove the Financial Barriers to DeFi & NFTs

The rapidly expanding world of DeFi is singlehandedly reshaping the global financial infrastructure as all manner of stocks, securities and transferable assets are slowly but surely being tokenized and stored in digital wallets. New protocols are arising daily that allow anyone with an internet connection or smartphone to access ecosystems that are equivalent to digital savings accounts but offer much more attractive yields than those found in the traditional banking sector. Unfortunately, with most of the top DeFi protocols currently operating on the Ethereum blockchain, the high cost of conducting transactions on the network has priced out ordinary individuals living in countries where even a $5 transaction fee is a significant amount of money, capable of feeding a family for a week. This is where competing new blockchain platforms have the biggest opportunity for growth and adoption, thanks to cross-chain bridges, a growing number of opportunities to earn a yield on new DeFi protocols and significantly smaller transaction costs.


The Dance Between Compute & Network In The Datacenter

In an ideal world, there is a balance between compute, network, and storage that allows for the CPUs to be fed with data such that they do not waste too much of their processing capacity spinning empty clocks. System architects try to get as close as they can to the ideals, which shift depending on the nature of the compute, the workload itself, and the interconnects across compute elements — which are increasingly hybrid in nature. We can learn some generalities from the market at large, of course, which show what people do as opposed to what they might do in a more ideal world than the one we all inhabit. We tried to do this in the wake of Ethernet switch and router stats and server stats for the second quarter being released by the box counters at IDC. We covered the server report last week, noting the rise of the single-socket server, and now we turn to the Ethernet market and drill down into the datacenter portion of it that we care about greatly and make some interesting correlations between compute and network.


ZippyDB: The Architecture of Facebook’s Strongly Consistent Key-Value Store

A ZippyDB deployment (called a "tier") consists of compute and storage resources spread across several regions worldwide. Each deployment hosts multiple use cases in a multi-tenant fashion. ZippyDB splits the data belonging to a use case into shards. Depending on the configuration, it replicates each shard across multiple regions for fault tolerance, using either Paxos or async replication. A subset of replicas per shard is part of a quorum group, where data is synchronously replicated to provide high durability and availability in case of failures. The remaining replicas, if any, are configured as followers using asynchronous replication. Followers allow applications to have many in-region replicas to support low-latency reads with relaxed consistency while keeping the quorum size small for lower write latency. This flexibility in replica role configuration within a shard allows applications to balance durability, write performance, and read performance depending on their needs. ZippyDB provides configurable consistency and durability levels to applications, specified as options in read and write APIs. For writes, ZippyDB persists the data on a majority of replicas by default.
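As an illustration of "consistency and durability as per-request options", here is a sketch of what such a client interface could look like. ZippyDB is an internal Facebook system with no public client, so every name, enum value, and default below is invented purely to mirror the behaviour described above.

```python
# Illustrative only: ZippyDB has no public API; these names mimic the described behaviour.
from dataclasses import dataclass
from enum import Enum

class ReadConsistency(Enum):
    EVENTUAL = "eventual"   # may be served by an async, in-region follower replica
    STRONG = "strong"       # served through the Paxos quorum for linearizable reads

class WriteDurability(Enum):
    QUORUM = "quorum"       # acknowledged once a majority of quorum replicas persist the data

@dataclass
class KVClient:
    tier: str  # a deployment ("tier") spans compute and storage across regions

    def put(self, key: bytes, value: bytes,
            durability: WriteDurability = WriteDurability.QUORUM) -> None:
        """Write a key; by default the data is persisted on a majority of replicas."""
        raise NotImplementedError

    def get(self, key: bytes,
            consistency: ReadConsistency = ReadConsistency.STRONG) -> bytes:
        """Read a key; relaxed consistency lets followers serve low-latency reads."""
        raise NotImplementedError
```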


CISA, FBI: State-Backed APTs May Be Exploiting Critical Zoho Bug

The FBI, CISA and the U.S. Coast Guard Cyber Command (CGCYBER) warned today that state-backed advanced persistent threat (APT) actors are likely among those who’ve been actively exploiting a newly identified bug in a Zoho single sign-on and password management tool since early last month. At issue is a critical authentication bypass vulnerability in Zoho ManageEngine ADSelfService Plus platform that can lead to remote code execution (RCE) and thus open the corporate doors to attackers who can run amok, with free rein across users’ Active Directory (AD) and cloud accounts. The Zoho ManageEngine ADSelfService Plus is a self-service password management and single sign-on (SSO) platform for AD and cloud apps, meaning that any cyberattacker able to take control of the platform would have multiple pivot points into both mission-critical apps (and their sensitive data) and other parts of the corporate network via AD. It is, in other words, a powerful, highly privileged application which can act as a convenient point-of-entry to areas deep inside an enterprise’s footprint, for both users and attackers alike.


Algorithmic Thinking for Data Science

Generalizing the definition and implementation of an algorithm is algorithmic thinking. What this means is, if we have a standard of approaching a problem, say a sorting problem, in situations where the problem statement changes, we would not have to completely modify the approach. There will always be a starting point to attack the new problem set. That’s what algorithmic thinking does: it gives a starting point. ... Why is the calculation of time and space complexities important, now more than ever? It has to do with something we discussed earlier – the amount of data getting processed today. To explain this better, let us walk through a few examples that will showcase the importance of large amounts of data in algorithm building. The algorithms that we casually create for problem-solving in a classroom are very different from what the industry requires when the amount of data being processed is more than a million times what we deal with in test scenarios. And time complexities are always seen in action when the input size is significantly larger.
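A quick back-of-the-envelope comparison (not from the article) shows why complexity only starts to bite at industry-scale input sizes: an O(n²) approach that is indistinguishable from an O(n log n) one on classroom-sized data becomes millions of times more expensive on production-sized data.

```python
# Rough operation counts for an O(n^2) vs an O(n log n) algorithm as input size grows.
import math

for n in (1_000, 1_000_000, 1_000_000_000):
    quadratic = n ** 2
    linearithmic = n * math.log2(n)
    print(f"n={n:>13,}  n^2={quadratic:.2e}  n*log2(n)={linearithmic:.2e}  "
          f"ratio={quadratic / linearithmic:,.0f}x")
# At n = 1,000 the gap is ~100x; at n = 1,000,000,000 it is roughly 33,000,000x,
# which is why an approach that felt fine on test data can be unusable in production.
```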


Forget Microservices: A NIC-CPU Co-Design For The Nanoservices Era

Large applications hosted at the hyperscalers and cloud builders — search engines, recommendation engines, and online transaction processing applications are but three good examples — communicate using remote procedure calls, or RPCs. The RPCs in modern applications fan out across these massively distributed systems, and finishing a bit of work often means waiting for the last bit of data to be manipulated or retrieved. As we have explained many times before, the tail latency of massively distributed applications is often the determining factor in the overall latency in the application. And that is why the hyperscalers are always trying to get predictable, consistent latency across all communication across a network of systems rather than trying to drive the lowest possible average latency and letting tail latencies wander all over the place. The nanoPU research set out, says Ibanez, to answer this question: What would it take to absolutely minimize RPC median and tail latency as well as software processing overheads?
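A simple probability sketch (not from the nanoPU research) illustrates why tails dominate fan-out workloads: if each of N parallel RPCs independently has a 1% chance of landing in its tail, the overall request waits on the slowest one, so the chance of hitting at least one tail-latency RPC climbs quickly with fan-out.

```python
# Probability that a fanned-out request hits at least one RPC slower than its p99.
for fanout in (1, 10, 100, 1000):
    p_all_fast = 0.99 ** fanout            # every RPC finishes within its p99
    print(f"fan-out={fanout:>5}: P(at least one tail RPC) = {1 - p_all_fast:.1%}")
# At a fan-out of 100, ~63% of requests are held up by a tail-latency RPC, which is
# why predictable latency matters more than a low average.
```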


RESTful Applications in An Event-Driven Architecture

There are many use cases where REST is just the ideal way to build your applications/microservices. However, increasingly, there is more and more demand for applications to become real-time. If your application is customer-facing, then you know customers are demanding a more responsive, real-time service. You simply cannot afford to not process data in real-time anymore. Batch processing (in many modern cases) will simply not be sufficient. RESTful services, inherently, are polling-based. This means they constantly poll for data as opposed to being event-based where they are executed/triggered based on an event. RESTful services are akin to the kid on a road trip continuously asking you “are we there yet?”, “are we there yet?”, “are we there yet?”, and just when you thought the kid had gotten some sense and would stop bothering you, he asks again “are we there yet?”. Additionally, RESTful services communicate synchronously as opposed to asynchronously. What does that mean? A synchronous call is blocking, which means your application cannot do anything but wait for the response.
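The contrast can be shown in a few lines. The sketch below (names are illustrative, not from the article) pits a blocking polling loop against a toy in-process event bus: the polling client keeps asking and waiting, while the event-driven handler runs only when something is published.

```python
# Polling vs event-driven, in miniature. The EventBus is a toy stand-in for a broker.
import time
from typing import Callable, Dict, List

def wait_by_polling(check_status: Callable[[], str], interval: float = 5.0) -> str:
    """Blocking 'are we there yet?' loop: synchronous calls, wasted requests in between."""
    while True:
        status = check_status()
        if status == "SHIPPED":
            return status
        time.sleep(interval)

class EventBus:
    """Toy in-process stand-in for a message broker."""
    def __init__(self) -> None:
        self._handlers: Dict[str, List[Callable[[dict], None]]] = {}
    def subscribe(self, topic: str, handler: Callable[[dict], None]) -> None:
        self._handlers.setdefault(topic, []).append(handler)
    def publish(self, topic: str, event: dict) -> None:
        for handler in self._handlers.get(topic, []):
            handler(event)

bus = EventBus()
bus.subscribe("order.shipped", lambda e: print(f"notify customer for order {e['order_id']}"))
bus.publish("order.shipped", {"order_id": "42"})  # handler fires only when the event occurs
```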


Application Security Tools Are Not up to the Job of API Security

With the advent of a microservice-based API-centric architecture, it is possible to test each of the individual APIs as they are developed rather than requiring a complete instance of an application — enabling a “shift left” approach allowing early testing of individual components. Because APIs are specified earliest in the SDLC and have a defined contract (via an OpenAPI / Swagger specification), they are ideally suited to a preemptive “shift left” security testing approach — the API specification and underlying implementation can be tested in a developer IDE as a standalone activity. Core to this approach is API-specific test tooling, as contextual awareness of the API contract is required. The existing SAST/DAST tools will be largely unsuitable in this application — in the discussion on DAST testing to detect BOLA we identified the inability of the DAST tool to understand the API behavior. By specifying the API behavior with a contract, the correct behavior can be enforced and verified, enabling a positive security model as opposed to a blacklist approach such as DAST.
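One way to picture the positive security model is contract-driven validation: requests are checked against the schema declared in the API specification, and anything the contract does not declare is rejected. The sketch below uses the jsonschema library with a hypothetical request-body schema of the kind that would appear in an OpenAPI document.

```python
# Positive security model in miniature: validate input against the declared contract
# rather than filtering known-bad patterns. The schema here is a hypothetical example.
from jsonschema import ValidationError, validate

create_user_schema = {
    "type": "object",
    "additionalProperties": False,       # reject any field the contract doesn't declare
    "required": ["username", "email"],
    "properties": {
        "username": {"type": "string", "maxLength": 32},
        "email": {"type": "string"},
    },
}

def accept_request(body: dict) -> bool:
    try:
        validate(instance=body, schema=create_user_schema)
        return True
    except ValidationError as err:
        print(f"rejected by contract: {err.message}")
        return False

accept_request({"username": "alice", "email": "alice@example.com"})          # allowed
accept_request({"username": "alice", "email": "a@b.c", "is_admin": True})    # rejected
```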


Microservice Architecture – Introduction, Challenges & Best Practices

In a microservice architecture, we break down an application into smaller services. Each of these services fulfills a specific purpose or meets a specific business need, for example customer payment management, sending emails, or sending notifications. In this article, we will discuss the microservice architecture in detail, the benefits of using it, and how to start with the microservice architecture. In simple words, it’s a method of software development where we break down an application into small, independent, and loosely coupled services. Each service is developed, deployed, and maintained by a small team of developers and has its own separate codebase managed by that team. These services are not dependent on each other, so if a team needs to update an existing service, it can do so without rebuilding and redeploying the entire application. These services communicate with each other using well-defined APIs, and their internal implementations are not exposed to each other.
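As a small illustration of "communicate through well-defined APIs without exposing internals", the sketch below shows one service calling another's HTTP endpoint; the service name, URL, and payload fields are hypothetical.

```python
# An order service calling a notification service only through its published HTTP API.
import json
import urllib.request

NOTIFICATION_SERVICE_URL = "http://notification-service.internal/api/v1/notifications"  # placeholder

def notify_customer(order_id: str, email: str) -> None:
    """The caller knows the contract (endpoint + payload), not the callee's internals."""
    payload = json.dumps({"order_id": order_id, "recipient": email}).encode("utf-8")
    request = urllib.request.Request(
        NOTIFICATION_SERVICE_URL,
        data=payload,
        headers={"Content-Type": "application/json"},
        method="POST",
    )
    with urllib.request.urlopen(request) as response:  # independently deployed service
        print("notification accepted:", response.status)
```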



Quote for the day:

"Leadership cannot just go along to get along. Leadership must meet the moral challenge of the day." -- Jesse Jackson

Daily Tech Digest - May 31, 2021

How The World Is Updating Legislation in the Face Of Persistent AI Advances

Recently, 13 cities across the US placed a ban on the use of facial recognition technology by the police. Interestingly, 12 of these 13 cities were Democrat-led, reflecting the cultural differences within the country itself. The European Union is the gold standard when we talk about data privacy and laws governing the various aspects of technology. To protect individuals’ rights and freedoms, Article 22 of the GDPR, “Automated individual decision making, including profiling,” has ensured the availability of manual intervention in automated decision making in cases where individuals’ rights and freedoms are affected. The first paragraph, “The data subject shall have the right not to be subject to a decision based solely on automated processing, including profiling, which produces legal effects concerning him or her or similarly significantly affects him or her,” and the third paragraph, “the data controller shall implement suitable measures to safeguard the data subject’s rights and freedoms and legitimate interests, at least the right to obtain human intervention on the part of the controller, to express his or her point of view and to contest the decision,” provide individuals with the right to manual intervention.


Why ML Capabilities Of GCP Is Way Ahead Of AWS & Azure

TPUs are Google’s custom-developed application-specific integrated circuits (ASICs) built to accelerate ML workloads. A big advantage for GCP is Google’s strong commitment to AI and ML. “The models that used to take weeks to train on GPUs or any other hardware can be put out in hours with TPUs. AWS and Azure do have AI services, but to date, AWS and Azure have nothing to match the performance of the Google TPU,” said Jeevan Pandey, CTO, TelioLabs. ... Google Cloud’s open-source contributions, especially in tools like Kubernetes – a portable, extensible, open-source platform for managing containerized workloads and services, facilitating declarative configuration and automation – have worked to its advantage. ... Google Cloud’s speech and translate APIs are much more widely used than their counterparts. According to Gartner’s 2021 Magic Quadrant, Google Cloud has been named the leader for Cloud AI services. Pre-trained ML models can be instantly used to classify objects in an image into millions of predefined categories. Additionally, one of the top ML services from Google Cloud is Vision AI, powered by AutoML.
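As a taste of the pre-trained vision services mentioned above, the snippet below labels an image with the Cloud Vision API via the google-cloud-vision Python client (the package must be installed and GCP credentials configured; the image path is a placeholder).

```python
# Label detection with the Cloud Vision API (requires google-cloud-vision and credentials).
from google.cloud import vision

client = vision.ImageAnnotatorClient()
with open("example.jpg", "rb") as image_file:          # placeholder image path
    image = vision.Image(content=image_file.read())

response = client.label_detection(image=image)
for label in response.label_annotations:
    print(f"{label.description}: {label.score:.2f}")   # predefined category + confidence
```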


How Robotic Processing Automation can improve healthcare at scale

RPA isn’t just a boon for patient-facing organizations—healthcare vendors are getting in on the action, too. For example, the company I work for faced the daunting challenge of transferring over 1 million pieces of patient data from one EMR to another. As any medical professional can attest, switching EMRs is a notoriously time-consuming process. So, we invested in RPA to bring efficiency to an otherwise manual and laborious task. In the end, we saved valuable time—and a significant chunk of change. ... One of the biggest contributors to burnout is the ever-increasing administrative work stemming from non-clinical tasks like documentation, insurance authorizations, and scheduling—all things that can be done faster and more accurately with RPA. And when providers are freed from the monotony, they have more time to focus on the parts of the job that they really enjoy. This, in turn, boosts morale and productivity, thus enhancing care delivery and optimizing patient outcomes overall. For those working in health care, the demand for digital solutions like RPA feels like the dawning of the new era—albeit one that is met with mixed emotions.


The many lies about reducing complexity part 2: Cloud

Managers in IT are sensitive to complexity, as it is generally their biggest headache. Hence, in IT, people are in a perennial fight to make the complexity bearable. One method that has been popular for decades has been standardisation and rationalisation of the digital tools we use, a basic “let’s minimise the number of applications we use”. This was actually part 1 of this story: A tale of application rationalisation (not). That story from 2015 explains how many rationalisation efforts were partly lies. (And while we’re at it: enjoy this Dilbert cartoon that is referenced therein.) Most of the time multiple applications were replaced by a single platform (in short: a platform is software that can run other software) and the applications had to be ‘rewritten’ to work ‘inside’ that platform. So you ended up with one extra platform, the same number of applications and generally a few new extra ways of ‘programming’, specific to that platform. That doesn’t mean it is all lies. The new platform is generally dedicated to a certain type of application, which makes programming these applications simpler. But the situation is not as simple as the platform vendors argue.


Implementing Nanoservices in ASP.NET Core

There is no precise definition of how big or small a microservice should be. Although microservice architecture can address a monolith's shortcomings, each microservice might grow large over time. Microservice architecture is not suitable for applications of all types. Without proper planning, microservices can grow as large and cumbersome as the monolith they are meant to replace. A nanoservice is a small, self-contained, deployable, testable, and reusable component that breaks down a microservice into smaller pieces. Unlike a microservice, a nanoservice does not necessarily reflect an entire business function. Since nanoservices are smaller than microservices, different teams can work on multiple services at a given point in time. A nanoservice should perform one task only and expose it through an API endpoint. If you need your nanoservices to do more work for you, link them with other nanoservices. Nanoservices are not a replacement for microservices - they compensate for the shortcomings of microservices.
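The article implements its nanoservices in ASP.NET Core; as a language-neutral sketch of the core idea (one task, exposed through one API endpoint) here is a single-purpose HTTP service using only the Python standard library. The endpoint path and port are illustrative.

```python
# A nanoservice in miniature: it does exactly one thing and exposes it as an endpoint.
import json
from http.server import BaseHTTPRequestHandler, HTTPServer

class GreetingNanoservice(BaseHTTPRequestHandler):
    """Single task: return a greeting for a given name via GET /api/greeting/<name>."""

    def do_GET(self) -> None:
        parts = self.path.strip("/").split("/")
        if parts[:2] != ["api", "greeting"]:
            self.send_error(404)
            return
        name = parts[2] if len(parts) > 2 and parts[2] else "world"
        body = json.dumps({"greeting": f"Hello, {name}!"}).encode("utf-8")
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.end_headers()
        self.wfile.write(body)

if __name__ == "__main__":
    # Larger workflows would be composed by linking this endpoint with other nanoservices.
    HTTPServer(("0.0.0.0", 8080), GreetingNanoservice).serve_forever()
```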


Five Data Governance Trends for Digital-Driven Business Outcomes in 2021

Knowledge of data-in-context, data processes, the best techniques to provision data, and the tools enabling these methods of self-service is crucial to democratize data. However, with technology advancements, including virtualization, self-service discovery catalogs, and data delivery mechanisms, internal data consumers can shop for and provision data in shorter cycles. In 2020, it took organizations anywhere between one and three weeks to provision complex data that includes integration from multiple sources. Also, an increase in data awareness will help data consumers further explore available dark data that can provide predictive insights to create new user stories that can propel customer journeys. ... A lack of focus is common across organizations, as they treat Data Governance as an extension of either compliance or a risk function. Data Literacy will, in fact, change the attitude of business owners towards having to actively manage and govern data. There are immediate and cumulative benefits from actively governing data, either by defining data or by fixing bad-quality data. But there is a need for a value-realization framework to actively manage the benefits of Data Management services.


Best practices for securing the CPaaS technology stack

Certifications are certainly important to consider when evaluating options, but even so, certifications don’t mean security. It is a best practice to check on the maturity of these vendor-specific certifications, as some companies go through a process of self-certification that doesn’t necessarily ensure the level of security your organization needs. Sending a thoughtful questionnaire to multiple vendors can be helpful for scoring these vendors’ security, offering a holistic and specific viewpoint to be considered by an organization’s IT team. On the customer end, in-house security and engineering staff can prep for CPaaS implementation by becoming familiar with the use of APIs and the authentication methods, communications protocols and the data that flows to and from them. Hackers routinely perform reconnaissance to find unprotected APIs and exploit them. Once CPaaS is incorporated into the hybrid work model technology stack, it is a best practice for an organization to focus its sights on its endpoint management. The use of a centralized endpoint management system that pushes patches for BIOS, operating systems, and applications is necessary for protecting the cloud network and customer data once a laptop connects.


3 SASE Misconceptions to Consider

Solution architecture is important, and yes, you want to minimize the number of daisy chains to reduce complexity. However, it doesn't mean you cannot have any daisy chains in your solution. In fact, dictating zero daisy chains can have consequences — not for performance, but for security. SASE consolidates a wide array of security technologies into one service, yet each of those technologies is a standalone segment today — with its own industry leaders and laggards. Any buyer who dictates "no daisy chains" is trusting that one single SASE provider can (all by itself) build the best technologies across a constellation of capabilities that is only growing larger. Being beholden to one company is not pragmatic given that the occasional daisy chain greatly increases the ability to unite best-of-breed technologies under one service provider's umbrella. ... SASE revolves around the cloud and is undoubtedly about speed and agility achieved through cloud-deployed security. But SASE doesn't mean the cloud is the only way to go and you should ignore everything else. Instead, IT leaders must take a more practical position, using the best technology given the situation and problem.


Advice for Someone Moving From SRE to Backend Engineering

The work you’re doing as an SRE will partly depend on your company culture. Without a doubt, some organizations will relegate their SREs to driving existing processes like watching the on-call queue to make sure there are no tickets, running deployments, etc. This can make folks feel like they aren’t progressing. However, today there are a lot more things you can do as an SRE than you once could. You used to just have Bash. Now you have many automation opportunities that will hone your programming skills. You can configure Kubernetes and Terraform. There are a bunch of code-oriented tools that you can use. You can orchestrate your stuff in Python. You could also use something like Shoreline if you want, which is “programming for operations,” and allows you to think of the world in terms of control loops, and how you can automate there. DevOps has also increased the Venn diagram overlap between SRE and backend engineering. Previously, it was engineers using version control and package managers, which was separate from SREs using deployment systems and Linux administration tools.
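To show what "thinking in control loops" can look like (a generic sketch, not Shoreline's actual product or API), the loop below repeatedly observes a metric, compares it with the desired state, and remediates; the path and the remediation step are placeholders.

```python
# A generic operations control loop: observe, compare with desired state, remediate.
import shutil
import subprocess
import time

DATA_PATH = "/var/lib/data"      # placeholder volume to watch
DISK_USAGE_LIMIT = 0.90          # desired state: stay under 90% full
CHECK_INTERVAL_SECONDS = 60

def disk_usage_fraction(path: str) -> float:
    usage = shutil.disk_usage(path)
    return usage.used / usage.total

def remediate(path: str) -> None:
    # Placeholder remediation: in practice this might rotate logs, purge caches,
    # or escalate to a human when automation cannot restore the desired state.
    subprocess.run(["find", path, "-name", "*.tmp", "-delete"], check=False)

def control_loop() -> None:
    while True:
        if disk_usage_fraction(DATA_PATH) > DISK_USAGE_LIMIT:
            remediate(DATA_PATH)
        time.sleep(CHECK_INTERVAL_SECONDS)

if __name__ == "__main__":
    control_loop()
```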


Inspect & Adapt – Digging into Our Foundations of Agility

When we need to change we usually feel a resistance against it. Take the current pandemic, for instance. The simple action of wearing a facemask in public has caused indisputable resistance in many of us. Cognitively we understand that there is a benefit to doing so, even if there were long discussions on exactly how beneficial it would be. But emotionally it did not come naturally and easily to most. Do you remember how it felt the first time you wore a facemask when entering the supermarket? It was not very pleasant, was it? But even when we are the drivers of change, we might find resistance against it. New year’s resolutions come to mind again. The majority of new year’s resolutions are abandoned come February, even though the desired results have not been achieved. In other words, the resistance to change might sometimes show up late to the party. What might be missing here is endurance and resilience to small setbacks. I believe that we need a thorough understanding of the situation we are currently in. This sounds simple and easy. And on a mid-level it is. "We need to come out of the pandemic with a net positive", a director of a company might say.



Quote for the day:

"It's very important in a leadership role not to place your ego at the foreground and not to judge everything in relationship to how your ego is fed." -- Ruth J. Simmons