Daily Tech Digest - November 11, 2021

Apache Pulsar: A Unified Queueing and Streaming Platform

While streaming systems like Apache Kafka were capable of scaling — with a lot of manual effort around data rebalancing — the streaming API was not always the right fit. It forced developers to work around the limitations of a pure streaming model while also learning a new way of thinking and designing, which made adoption for messaging use cases more difficult. With Pulsar, the situation is different. Developers can use a familiar API that works in a familiar way while gaining the scalability and capabilities of a streaming system. The need for scalable messaging alongside streaming is a challenge my team at Instructure faced. We were dealing with high-scale situations that demanded higher-scale messaging, and initially we tried to build this by re-architecting around streaming tech. Then we found that Apache Pulsar was the perfect fit: it gave teams the capabilities they needed without the complexity of re-architecting around a streaming-based model.
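
To make the "familiar API" point concrete, here is a minimal sketch of producing and consuming with Pulsar's Python client (pulsar-client). The broker URL, topic, and subscription names are illustrative assumptions, not details from the article:

```python
import pulsar

client = pulsar.Client('pulsar://localhost:6650')

# Producing looks like any familiar messaging API.
producer = client.create_producer('persistent://public/default/orders')
producer.send(b'order-created')

# A shared subscription gives queue semantics - messages are load-balanced
# across consumers - while the underlying log keeps streaming capabilities.
consumer = client.subscribe('persistent://public/default/orders',
                            subscription_name='order-workers',
                            consumer_type=pulsar.ConsumerType.Shared)
msg = consumer.receive()
consumer.acknowledge(msg)
client.close()
```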


Thriving in the Complexity of Software Development Using Open Sociotechnical Systems Design

How can companies then make headway in such a turbulent environment? And how can agile and the pipe-dream of a true product team come true? Let us once more take a look at the theoretical underpinnings of STSD, especially the OST extension developed by Fred Emery during the Norwegian Industrial Democracy Program in 1967. He identified what he referred to as the two genotypical organisational design principles, simply called DP1 and DP2 (DP – design principle), which are defined by the way organisations build in redundancy in order to operate efficiently. DP1 is where there is redundancy of parts, i.e. each part is so simple that it can easily and cheaply be replaced. The simpler, the better, but that also means the parts need to be coordinated or supervised in order to complete a whole task. This is what we all know as the classical bureaucratic hierarchy with maximum division of labour. The critical feature of DP1 is that responsibility for coordination and control is located at least one level above where the action is being performed.


Learn how Microsoft strengthens IoT and OT security with Zero Trust

The practice of adopting multiple tools to monitor different tiers of suppliers increases complexity, which in turn increases the odds that a cyberattack can produce a significant return for your adversary. Silos can create additional problems—different teams have different priorities, which may lead to different risk priorities and practices. This inconsistency can create duplicated effort and gaps in risk analysis. Suppliers’ personnel are also a top concern. Organizations want to know who has access to their data, so they can protect themselves from human liability, shadow IT, and other insider threats. For supplier risk management, an always-on, automated, integrated approach is needed, but current processes aren’t well-suited to the task. To secure your supply chain, it’s important to have a repeatable process that will scale as your organization innovates. ... With the prevalence of cloud connectivity, IoT and OT have become another part of your network. And because IoT and OT devices are typically deployed in diverse environments—from inside factories or office buildings to remote worksites or critical infrastructure—they’re exposed in ways that can make them easy targets.


Humanizing hackers: Entering the minds of those behind the attacks

Developers are invariably specialists in only the front end, API development, or databases. Their ability to perceive the entire system as a whole is somewhat challenged by their role in the organization and by the limits of their systemic understanding. Typically, developers identify a problem and look for the simplest and fastest solution possible (a patch-by-patch formula) without having the full context. A developer’s primary focus is on user experience or the quality of the application. If the immediate customer is satisfied, not through security but by delivering functionality, the company is unconcerned. Furthermore, developers are not always trained on security and compliance, and security officers have little input on protocol or policy. Security teams only retroactively review applications and ecosystem security when systems are already in production – by that time, it is already too late. What should be ingrained into the company DNA has become an after-the-fact consideration. If a boat has enough holes, it will eventually sink – that’s why companies are becoming obvious targets for hackers.


Dridex Banking Malware Turns Up in Mexico

Metabase Q noticed three Dridex campaigns in Mexico starting in April of this year, writes José Zorrilla of Metabase Q's Offensive Security Team, Ocelot, in a blog post. The hosting and distribution point for Dridex was the website of Odette Carolina Lastra García, a representative for the Green Party - Partido Verde Ecologista de México - in the Congress of the state of Tabasco. Her website may have been vulnerable to compromise and was used to pass on Dridex, Zorrilla writes. The site was suspended around mid-October. Zorrilla writes that there were three observed campaigns. In April, phishing emails carrying Dridex were sent around the world with links that led to a version of Dridex placed on Lastra García's website. In August, deceptive SMS messages made the rounds that purported to come from the bank Citibanamex. Those messages contained a link that redirected to Lastra García's infected website. There was also a third ruse using the SocGholish framework. SocGholish uses several types of social engineering frameworks to try to entice people to download a bogus software update, which is actually a remote access Trojan.


How leaders can help teams fight fatigue: 7 practical tips

Perhaps the most novel idea is to get ahead of it: name burnout and point to it as a thing people – managers in particular – should be on the lookout for, and encourage known prevention efforts before it becomes a problem that needs to be solved. More importantly, encourage everyone to look out for each other, because caring for others is proven to help stave off your own burnout. We coach people to look for the signs, not necessarily to ask about them – it can be hard for people to self-diagnose burnout, but easier for others to observe changes in their behaviors. We look for cynicism, dissatisfaction, lack of motivation, irritability, impatience, and tiredness. We ask questions about how they've experienced changes in their feeling, thinking, and behaving, and even whether they're aware of the known signs of burnout. When the earliest signs are spotted, we want to get ahead of it with overt encouragement. We use team-wide Slack conversations to demonstrate and celebrate self-care in order to remove the stigma and the sense that self-care must be an offline or undisclosed use of time.


How an as-a-service model lends itself to achieving climate change goals

More importantly, a circular economy also requires a shift in the way we do business, from purchasing and installing huge amounts of equipment that sit underutilised to an as-needed model. Adopting a “product-as-a-service” business model is one of the most impactful changes we can make today. It not only makes our economy more circular by breaking established patterns of mismatched supply and demand; it also has the potential to generate significant growth opportunities for any industry. As-a-service is a radical departure from a commoditised business model whereby companies sell a product and consider their job done. Instead, the producer retains ownership of – and responsibility for – the product throughout its entire life cycle. The customer has full use of the product for as long as it is needed, paying only for the time it is actually used, instead of for the product itself or its upkeep. The producer, in turn, is responsible for building a quality product that lasts, and is energy and material efficient. It is also their role to take the product back and prepare it (or its components) for reuse.


NFT is enough: why digital art is much more than copy and paste

NFTs aren’t just a buyers’ game though, far from it. The creation of digital artwork and its sale using blockchain technology has opened up a whole new marketplace for budding artists and is blurring geographical boundaries. While artists creating physical work can often be tied to their local markets or to one specific place displaying pieces in galleries, NFTs and the internet enable those producing digital pieces to have a global audience at their fingertips. The role blockchain plays in providing ownership and authenticity is vital for these artists too. Without a large following already, many artists, including those from remote parts of the world, may struggle to prove their credibility. Whether or not the art itself is appealing, art lovers are unlikely to purchase an item if they don’t have concrete proof of the authenticity of the piece. NFTs essentially level the playing field and create opportunities for millions of artists to get their pieces recognised worldwide. While reputation will still ultimately be a contributing factor, as one would expect, it enables artists to let the artwork speak for itself.


The New Enterprise Risk Management Strategy

As more applications, systems and infrastructure are now designed and built in a highly distributed and always-available manner, they are highly resilient, fault tolerant, elastic and scalable - in the cloud and/or on-premises. This helps address the availability aspect of the C.I.A. triad. Because the applications, systems, and infrastructure are created to be immutable, small changes are detected very easily, which greatly reduces the burden of maintaining integrity. Integrity problems occur when we have the ability to make changes, either intentionally or unintentionally, that are very hard to detect; that is what undermines the integrity aspect of the C.I.A. triad. ... According to Rinehart, the co-founder and CTO of Verica, Security Chaos Engineering is a way to approach security differently. The idea is to test the resiliency of security controls continuously and automatically in the face of chaos - simulated real-life events run on real production systems in a controlled manner - without affecting other systems. This helps security practitioners build confidence in, learn about, and improve the resiliency and effectiveness of those controls over time.
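
As a concrete illustration of that "continuously and automatically" loop, here is a minimal sketch of a security chaos experiment. The three callables (inject_fault, remove_fault, alert_fired) are hypothetical stand-ins to be wired to your own tooling; the structure is the point, not the calls:

```python
import time

def run_experiment(inject_fault, remove_fault, alert_fired, timeout_s=60):
    """Inject a controlled security fault and verify that detection fires."""
    inject_fault()                      # e.g., open a test port in staging
    try:
        deadline = time.time() + timeout_s
        while time.time() < deadline:
            if alert_fired():           # did monitoring catch it in time?
                return True
            time.sleep(5)
        return False                    # control failed silently: a finding
    finally:
        remove_fault()                  # always restore the steady state
```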


How to manage endpoint security in a hybrid work environment

The issue with remote working is that lots of employees leave their work devices at the office – or don’t have any at all – and end up using their own personal electronics for work purposes. Often, these are insecure and put business data at risk. Moore says employees can access secure office-based machines while working from home through virtual desktop infrastructure (VDI), but they still need to use their own computer to do this. He warns: “Endpoint security may not be the first thought on employees’ minds, which can cause issues when data is transferred to these devices not owned by the company. Even with regulations drawn up, employees are able to transfer data relatively easily.” Hybrid working can also exacerbate the risk of illicit data transfers by people within an organisation. “This can be where the employee is in the early stages of exiting a company and considering taking company information with them,” says Moore. “Furthermore, there is a threat of the employee who wants to damage the company by stealing sensitive data, which is made much harder to police when remote working.”



Quote for the day:

"It is the capacity to develop and improve their skills that distinguishes leaders from followers." -- Warren G. Bennis

Daily Tech Digest - November 10, 2021

All your serverless are belong to us

Though serverless has been enabled by the clouds, serverless functions aren’t simply a big cloud game. As Vercel CEO (and Next.js founder) Guillermo Rauch details in the Datadog report, “Two years ago, Next.js introduced first-class support for serverless functions, which helps power dynamic server-side rendering (SSR) and API routes. Since then, we’ve seen incredible growth in serverless adoption among Vercel users, with invocations going from 262 million a month to 7.4 billion a month, a 28x increase.” From such examples, and many others (including ever shorter function invocation times, which indicate that enterprises are becoming more proficient with functions), it’s clear that serverless computing has taken off. Vendors will continue to press the “no lock-in” marketing button, but customers don’t seem to care. Rather, they may care about lock-in, but they care much more about accelerating their time to customer value. In enterprise computing, as in life, there are always trade-offs. The cost of a perfectly lock-in-free existence is lowest-common-denominator code that is generic across hardware/cloud platforms.


5 Things To Remember When Upgrading Your Legacy Solution

Contrary to a common idea, a legacy system is not necessarily old. The most damaging part of these systems is that they are still employed even though they frequently fail to meet critical demands and support core business operations as they are meant to. So, let’s face it - when there is legacy software you cannot replace, you should at least go for modernization. "Legacy systems are not that safe," says Daniela Sawyer, Founder and Business Development Strategist of FindPeopleFast.net. "This happens because, being the older technology, they are not usually supported by the company or the vendor who created them in the first place. They also lack the regular updates and patches needed to keep pace with the modern world. So any new update should address the security aspect precisely." Although it might appear as though you're saving costs when you don't spend money updating your digital product, that might cost you much more over the long haul.


The role of visibility and analytics in zero trust architectures

There are three NIST architecture approaches for ZTA that have network visibility implications. The first is using enhanced identity governance, which (for example) means using the identity of users to allow access to specific resources only once they are verified. The second is using micro-segmentation, e.g., when dividing cloud or data center assets or workloads, segmenting that traffic from the rest to contain threats and prevent lateral movement. And finally, using network infrastructure and software-defined perimeters, such as zero trust network access (ZTNA), which for example allows remote workers to connect to only specific resources. NIST also describes monitoring of ZTA deployments, outlining that network performance monitoring will need security capabilities for visibility. This includes inspecting and logging traffic on the network (and analyzing it to identify and react to potential attacks), including asset logs, network traffic and resource access actions. Furthermore, NIST expresses concern about the inability to access all relevant and encrypted traffic – which may originate from non-enterprise-owned assets or applications and/or services that are resistant to passive monitoring.


How business and IT can overcome the data governance challenge

Business heads and their teams, after all, are the ones who have the knowledge about the data – what it is, what it means, who and what processes use it and why, as well as what rules and policies should apply to it. Without their perspective and participation in data governance, the enterprise’s ability to intelligently lock down risks and enable growth will be seriously compromised. However, with their engagement, sustainable payback will be achieved and the case for continuing commitment by the enterprise to data governance will be easier to justify. It is vital, however, that modern data governance is a strategic initiative. A data governance strategy is the foundation upon which to build a muscular data-driven organization. Appropriately implemented – with business data stakeholders driving alignment between data governance and strategic enterprise goals and IT handling the technical mechanics of data management – the door opens to trusting data and using it effectively. Data definitions can be reconciled and understood across business divisions, knowledge base quality can be guaranteed, and security and compliance do not have to be sacrificed even as information accessibility expands.


Data Advantage Matrix: A New Way to Think About Data Strategy

For SaaS companies, the funnel is everything. Optimizing metrics at every stage of the funnel is what accelerates SaaS companies from average to exponential growth. So for any SaaS founder, if you don’t have basic operational analytics set up on day one, you’re probably doing something wrong. This fictional SaaS startup would start at the top left of the matrix with basic operational analytics. These analytics don’t have to be complicated. At Stage 1, it’s all about getting the basics right — measuring the number of leads per day, users converting on the site, users signing up for the product, free trials that end up paying, etc. Given the importance of operational analytics, it would make sense for this startup to move to Stage 2 pretty quickly — converting its basic analytics into something more scalable, like a centralized intelligence engine. This would include investing in a data warehouse that brings all data into one place, adding a BI tool, and hiring the first analysts to drive data-driven decisions where it matters most.
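
As a toy sketch of what Stage 1 "basic operational analytics" might compute, here is stage-to-stage funnel conversion; the stages and counts are invented placeholders:

```python
funnel = {                  # stage -> users reaching it (hypothetical data)
    "site_visitors": 12000,
    "signups": 1800,
    "free_trials": 600,
    "paying": 90,
}

stages = list(funnel)
for prev, cur in zip(stages, stages[1:]):
    rate = funnel[cur] / funnel[prev]
    print(f"{prev} -> {cur}: {rate:.1%} conversion")
```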


Fintech has a gender diversity problem — here’s how we tackle it

Diversity can both empower individuals and spark feelings of inclusion across society. It encourages different perspectives and promotes tolerance and understanding in workplaces. And in business, quite rightly, the topic has entered the mainstream. Take the engineering industry as an example, where gender equality figures are showing an encouraging, steady upward trajectory. In law, female representation across the world is also respectable. Both show gender equality is slowly, but surely, moving in the right direction, but sadly in financial technology (Fintech), the same is yet to be realised. A report by Innovate Finance found women still account for less than 30% of the Fintech workforce, with less than 20% in executive positions. By 2026, the industry is estimated to grow by 20%, hitting the $324 billion mark in value, meaning that the gender gap will soon widen even further. But how can Fintech continue to progress and thrive if it isn’t a desirable industry for all? Clearly, the industry needs to do more. The question is, how?


The IT Talent Crisis: 2 Ways to Hire and Retain

While CompTIA’s research notes that money is the top reason for workers to leave, another factor is the lack of opportunity. “Our research indicates that a top reason tech workers consider leaving is a lack of career growth opportunities, a telling message to employers not to underestimate the value of investing in staff training and professional development,” said Tim Herbert, executive VP for research and market intelligence at CompTIA, in a press release. Investing in employee training during a labor crunch can also have downsides if employees take advantage of training and then use those added skills to parlay their way into a new opportunity elsewhere. But if employees are leaving, they are also going somewhere. Pyle recommends that organizations not only look carefully at their compensation package offers but also consider casting a wider net for candidates by looking outside their usual geography. “The hybrid work environment works,” he says. “People can work from anywhere. If we are bringing in the right talent we can bring them in from anywhere, as long as they can do the job.”


CIO role: How to move from gatekeeper to advisor

Historically, the role of the CIO focused on identifying, implementing, and maintaining business IT systems, with budget set aside to explore and drive innovation within the organization. Moving into the 2000s and 2010s, CIOs were tasked with spearheading digital transformation and the journey to the cloud. Today’s CIO must work as an advisor and partner to departments across the organization, seeking to understand the needs of the wider business and ensuring that those needs are met in a way that works both for the individual and the wider business aims. However, this distributed approach brings challenges: How, for example, does IT respond to a vulnerability in a piece of software that IT did not know was running? Many IT leaders tell us that the barrier between shadow IT and business-led IT is becoming more and more blurred and is forcing tradeoffs around risk vs. flexibility. It is IT’s role to not only facilitate business needs but also ensure compliance and security. This creates tension between offering advice vs. imposing governance.


Hackers Disrupt Canadian Healthcare and Steal Medical Data

The attack has resulted in ongoing disruptions to care in addition to exposed data. The province comprises four regional health authorities, although data was not stolen from all of them: Western Health - no data believed to have been stolen; Central Health - data exposure unclear; Eastern Health - 14 years of data exposed; and Labrador-Grenfell - 9 years of data exposed. Officials say they're attempting to restore systems from backups, and that the process remains underway and is not yet complete. On Thursday, for example, public broadcaster CBC reported that while the Health Sciences Center hospital in the city of St. John's had restored its Meditech system, which handles patient health information and financial details, it only included information from before the attack. Each health authority has been publishing its own updates on the ongoing disruptions it continues to face. Through at least Wednesday, for example, Western Health noted that only some appointments would be proceeding, including chemotherapy appointments "at a reduced capacity."


The Renaissance of Code Documentation: Introducing Code Walkthrough

As inline comments describe only the specific code area they are attached to, without a broader scope, they are always limited. As for high-level documentation, it can indeed provide the big picture, but it lacks the details that developers need for their work. For example, in documentation about extending git’s source code, you can definitely describe something like the general process of creating a new git command in a high-level document. However, you won’t be able to do so effectively without getting into specific details and giving examples from the code itself. ... Code-Walkthrough Documentation takes the reader on a “walk” made up of at least two stations within the code. These documents describe flows and interactions, and they may rely on incorporating code snippets or tokens to do so. In other words, they are code-coupled, in accordance with the principles of Continuous Documentation. This kind of document provides an experience similar to getting familiarized with a codebase with the help of an experienced contributor who walks you through the code.



Quote for the day:

"You can't be a leader if you can't influence others to act." -- Dale E. Zand

Daily Tech Digest - November 09, 2021

Bias still dominates the discussion of AI adoption in business

Organisations are at last beginning to take ethical standpoints on machine learning and its role in automated decision-making. According to HBR, companies (including Google, Microsoft, BMW and Deutsche Telekom) are creating internal AI policies, making commitments to fairness, safety, privacy and diversity. Organisations must recognise machine learning as a predictive technology that requires the application of judgement—a key part of any such policy—ensuring interpretability and, consequently, trust. While it might be hard to remove bias from your data entirely, you can effectively minimise the effects of that bias by applying a layer of systemised judgement. This turns predictions into decisions that can be trusted. To achieve this you need technology that can efficiently and transparently automate that governance process. New platforms enable firms to apply machine-learnt predictions safely by incorporating a layer of automated human judgement into their systems.


Failing Fast: The Impact of Bias When Speeding Up Application Security

You have a tools bias if you're spending thousands of dollars on tools and systems to integrate them into your development lifecycle. Not every tool needs to cost you a lot of money. There are plenty of amazing, free, open source tools out there. Not everyone needs to be spending that much money. Do you have tools purchased but not properly implemented into your build pipeline? Maybe they were put in and then removed because they were causing you pain. Or maybe you got them and put them in a learning mode, but never got them fully installed. That's a tools bias. You've spent the money and focus believing the tool will solve the problem, but you've not actually solved the problem. You got halfway there and stopped. If there's no plan for maintaining, tuning, or configuring tools post-purchase, also known as a salesperson-driven development style, then you've got a tools bias. Your tool has not made you more secure. Your tool has given you the feeling of security, but without the actual action.


Why regulation of tech platforms is the new game changer for strategy

Regulation is proving pivotal in conflicts created when traditional firms compete with or participate in ecosystems dominated by big tech. How many of the profit opportunities created by new regulation will be gobbled up by big tech, and how much of that profit can be internalized by their partners? For instance, regulators are asking: Is it appropriate for a dominant ecosystem orchestrator like Apple to forbid content providers from accessing customers and demanding payments directly? And, given the modest effort Apple put into setting up its App Store, is its 30% cut from every app sold there a fair practice or a blatant abuse of dominant position? Epic Games’ recent lawsuit against Apple (which centered around how people pay for the Fortnite game) sailed bravely into these uncharted waters; the judge ultimately ordered Apple to reverse some, if not all, of its practices. Consider also the drama currently playing out in digital advertising. Big tech firms, supported by their ecosystem partners, have helped spawn a successful industry focused on understanding the profile of individual customers and offering them tailored advertising.


Why are we still asking KBA questions to authenticate identity?

The federal government has long acknowledged the risks presented by KBA, and NIST’s own guidelines expressly disavow KBA for digital applications: “The ease with which an attacker can discover the answers to many KBA questions, and the relatively small number of possible choices for many of them, cause KBA to have an unacceptably high risk of successful use by an attacker.” Meanwhile, a study by Google found that only 47% of people could remember what they put down as their favorite food a year earlier – and that hackers were able to guess the food nearly 20% of the time, with Americans’ most common answer (of course) being pizza. And even when a user does remember the correct answer to one of these questions, they sometimes forget the precise form of their answer, all of which leads to a frustrating customer experience. Protracted verification times inevitably lead to customer abandonment of transactions such as opening a new account, resulting in delayed or lost business. Unsurprisingly, the longer it takes to verify a customer’s identity, the more likely it is they will abandon the process entirely.


Thousands Now Find Success with OKRs, Why Aren't You?

Objectives and Key Results (OKRs) is a flexible tool that helps people and organizations achieve their goals by setting specific and measurable actions, and it also helps them communicate and monitor progress towards those goals. Objectives should be short and inspirational; an objective defines the goal you want to achieve. Companies are capable of creating three to five high-level objectives per quarter of the year. This helps them increase their brand awareness, and these objectives are meant to be ambitious. Choosing the right objective for your goal can be a challenging aspect of this practice, but when it's done correctly, you can tell whether you have reached your objective. Key Results help you deliver each set of objectives, so you can measure your progress towards achieving your goals. ... OKRs are a flexible framework, and because of this, you can set and phrase OKRs in different ways. Think of them as the pillar of your strategy for the next period. To come up with good OKRs, I would advise connecting them to your day-to-day activities.


How can we eliminate gender bias in tech?

There’s clear evidence of professional prejudice against working mothers — women are passed up for job progression and prevented from exploring other opportunities. This is called the ‘motherhood penalty’. On average, women lose 4% of hourly earnings when they start a family; a significant amount when taken as a proportion of lifetime earnings. Men, by comparison, gain an average pay rise of 6% after becoming fathers. Moving forward, employers must make clear to female staff that they will be judged purely on performance, not on their working schedules – opening the door to more flexible working options and letting women advance professionally without jeopardising family commitments. Likewise, the stigma around shared parental leave must be addressed, normalising a man’s role as an equal caregiver when tending to a newborn. With more equitable paternity policies, female staff will be better enabled to pursue senior leadership roles.


The Crypto Industry Isn’t Too Thrilled About Biden’s Big Policy Moves

Despite heavy lobbying by the crypto industry back in August to clarify the definition of “broker” as it applies to digital assets, the proposed bill passed the Senate without any amendments; it was introduced and voted through the Senate within a week. While the bill was awaiting House approval, I spoke to some crypto tax lawyers in the U.S. about how things might play out if it is signed into law without amendments. Nathan Giesselman, a partner at Skadden, Arps, Slate, Meagher & Flom LLP, told me that, as written, the provision runs the risk of capturing folks like miners and developers who don’t have the customer information that a traditional broker would, putting them in the awkward position of not being able to comply with the required reporting. Now that the House has passed the bill, it’s clear that much will depend on how the U.S. Treasury Department interprets the definition of broker.

 

Six AI and Big Data Trends in Banking for 2022

Big data and AI require intense computing horsepower, so banks and credit unions are increasingly turning to the cloud to host data and applications. Not only is the cloud able to scale to handle high computing demands, but it does so cost-effectively. IDC states that global spending on cloud services — including hardware and software — will surpass $1.3 trillion by 2025, growing at a CAGR of 16.9%. Both shared (public) cloud and dedicated (private) cloud are slated to grow, says IDC, with private cloud growing at a faster rate. Since bank legacy systems weren’t designed for distributed computing environments, moving them to the cloud is challenging. However, banks and credit unions are softening up to the idea of not just moving legacy systems to the cloud but transforming them into cloud-native platforms, although few have made the leap to a fully cloud-based environment. JPMorgan Chase and Arvest Bank have both announced that they will switch portions of their core systems to a cloud-native platform.


The cyber insurance dilemma: The risks of a safety net

Every company owner should be aware of what they are looking for when it comes to cyber insurance. They should always read the fine print and understand the specifics of coverage, deductibles, and exclusions. This safety net can be highly effective if the policy is correctly written and the business is fully aware of its coverage. According to Dan Burke, Vice President at Woodruff Sawyer (a national insurance provider), cyber insurance typically doesn’t cover three types of losses: potential future lost profits, loss of value due to the theft of intellectual property, and betterment (i.e., the cost to improve internal technology systems after the attack, such as IT upgrades after a cyber event). That said, losses beyond the initial ransom are not likely to be covered by insurance. Today, most ransomware attacks do not stop at the initial breach. Take the SolarWinds incident as an example: instead of locking SolarWinds’ IT systems, attackers planted malicious code into the company’s Orion technology platform, which is used by more than thirty thousand customers, including the U.S.


Why organisations need to take charge of Office 365 backup and recovery

No matter the size of your organisation, if you’ve automated the backup process for your Office 365 environments, then you’ve taken a big first step to protect your data and ensure its quick recovery. Keep in mind that access to regularly backed up files significantly improves the chances of recovering from a system outage or malware attack. Find a solution that will let you effortlessly pinpoint SaaS data and records. Organisations need to be able to perform targeted restores, preserve critical data sets, and manage production and sandbox environments with ease. Some of this will come down to granular search and restore, but it’s also a good idea to implement point-in-time and version-level recovery tools and immediate restores. Staying secure means that it’s easier to stay compliant. Look for a solution that offers stringent standards, privacy protocols, and zero-trust access controls — this could also include isolated, air-gapped backups separate from source data, built-in GDPR compliance, and data encrypted at rest or in flight. Multi-layering your security also means you can add role-based, SSO and SAML authentication controls.



Quote for the day:

"Curiosity is the thing that sparks a step into an adventure." -- Annie Lennox

Daily Tech Digest - November 08, 2021

A New Quantum Computing Method Is 2,500 Percent More Efficient

Today, most quantum computers can only handle the simplest and shortest algorithms, since they're so wildly error-prone. And in recent algorithmic benchmarking experiments executed by the U.S. Quantum Economic Development Consortium, the errors observed in hardware systems during tests were so serious that the computers gave outputs statistically indiscernible from random chance. That's not something you want from your computer. But by employing specialized software to alter the building blocks of quantum algorithms, which are called "quantum logic gates," the company Q-CTRL discovered a way to reduce the computational errors by an unprecedented level, according to the release. The new results were obtained via several IBM quantum computers, and they also showed that the new quantum logic gates were more than 400 times more efficient in stopping computational errors than any methods seen before. It's difficult to overstate how much this simplifies the procedure for users to experience vastly improved performance on quantum devices.


Design Patterns for Machine Learning Pipelines

Design patterns for ML pipelines have evolved several times in the past decade. These changes are usually driven by imbalances between memory and CPU performance. They are also distinct from traditional data processing pipelines (something like MapReduce) in that they need to support the execution of long-running, stateful tasks associated with deep learning. As growth in dataset sizes outpaces memory availability, we have seen more ETL pipelines designed with distributed training and distributed storage as first-class principles. Not only can these pipelines train models in a parallel fashion using multiple accelerators, but they can also replace traditional distributed file systems with cloud object stores. Along with our partners from the AI Infrastructure Alliance, we at Activeloop are actively building tools to help researchers train arbitrarily large models over arbitrarily large datasets, such as our open-source dataset format for AI. ... Even though the problem of transfer speed remains, this design pattern is widely considered the most feasible technique for working with petascale datasets.
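
A hedged sketch of the pattern described above - streaming samples from a cloud object store instead of a local file system - where fetch_object is a hypothetical stand-in for your storage client (boto3, gcsfs, etc.):

```python
from typing import Callable, Iterator

def stream_samples(keys: list[str],
                   fetch_object: Callable[[str], bytes]) -> Iterator[bytes]:
    """Yield raw samples lazily; nothing is held in memory up front."""
    for key in keys:
        yield fetch_object(key)   # a network read replaces a local disk read

# Downstream, a training loop consumes the iterator; for distributed
# training, each worker takes its own shard of `keys`.
```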


Daily Standup Meetings are useless

Standups are not about technical details, even though some technical context can help frame any complexity that has arisen and allow the PjM and leads to take the necessary steps to enable you to achieve your tasks (additional meetings, extending the deadline, re-estimating the task within the sprint, setting up pair programming sessions, etc.). According to the official docs, the Daily Scrum is a 15-minute event for the Developers of the Scrum Team. The purpose of the Daily Scrum is to inspect progress toward the Sprint Goal and produce an actionable plan for the next day of work. This creates focus and improves self-management. Daily Scrums improve communications, identify impediments, promote quick decision-making, and consequently eliminate the need for other meetings. Honestly, I am not so sure about the last point: due to the short time allocated to it, the standup can indeed generate other meetings - follow-ups between the Tech Lead and (some of) the developers, between the team and the stakeholders, or among developers who decide to tackle an issue with pair programming.


When Is the Waterfall Methodology Better Than Agile?

Is waterfall ever better? The short answer is yes. Waterfall is more efficient, more streamlined, and faster when it comes to specific types of projects. Generally speaking, the smaller the project, the better suited it is to waterfall development: if you’re only working with a few hundred lines of code or if the scope of the project is limited, there’s no reason to take the continuous phased approach. Low-priority projects – those with minimal impact – don’t need much outside attention or group coordination; they can easily be planned and knocked out with a waterfall methodology. One of the best advantages of agile development is that your clients get to be an active part of the development process, but if you don’t have any clients, that advantage disappears; if you’re working internally, there are fewer voices and opinions to worry about – which means waterfall might be a better fit. Similarly, if the project has few stakeholders, waterfall can work better than agile. If you’re working with a council of managers or an entire team of decision makers, agile is almost a prerequisite.


To Secure DevOps, Security Teams Must be Agile

Focusing on a pipeline using infrastructure-as-code allows security teams to build in static analysis tools to catch vulnerabilities early, dynamic analysis tools to catch issues in staging and production, and policy enforcement tools to continuously validate that the infrastructure is compliant, Leitersdorf said. "If you think about how security can be done now, instead of doing security at the tail end of the process ... you can now do security from the beginning through every step in the process all the way to the end. Most security issues will be caught very early on, and then a handful of them will be caught in the live environment and then remediated very quickly," he said. Developers get to retain their speed of development and deployment of applications and, at the same time, reduce the time to remediate security issues. And security teams get to collaborate more closely with DevOps teams, he said. "From a security team perspective, you feel better, you feel more confident, you have guardrails around your developers to reduce the chance of making mistakes along the way and building insecure infrastructure and you now have visibility into their DevOps process, a huge bonus," Leitersdorf said.
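
A small sketch of one such pipeline gate: run an infrastructure-as-code static-analysis tool and fail the build on findings. Checkov is a real example of this tool category; the directory layout here is an assumption:

```python
import subprocess
import sys

result = subprocess.run(
    ["checkov", "-d", "infrastructure/"],   # scan the IaC definitions
    capture_output=True, text=True,
)
print(result.stdout)
if result.returncode != 0:                  # non-zero exit = failed checks
    sys.exit("IaC scan found violations: fix them before deploying")
```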


How to Avoid Vulnerabilities in Your Code

In general, the "minor problems" are precisely those that result in the most extensive security disasters, and we can say that failures usually present two gaps: We encountered flaws such as insecure code design, injection, and configuration issues through a code vulnerability intentionally or unintentionally: within the TOP 10 demonstrated by OWASP; Through the vulnerability of operations: among the most common problems, we can mention the choice for "weak passwords", default or even the lack of password. A second common failure is the mismanagement of people's permission to a document or system. These types of problems, unfortunately, are pretty standard. Not by chance, 75% of Redis servers have issues of this type. In an analogy, we can say that security flaws are like the case of the Titanic. Considered one of the biggest wrecks, most people are unaware that the ship had a "small problem": the lack of a simple key that could have opened the compartment with binoculars and other devices to help the crew visualize the iceberg in time and prevent a collision.


Taking Threat Detection and Response to the Next Level with Open XDR

XDR fundamentally brings all the anchor tenets required to detect and respond to threats into a simple, seamless user experience for analysts that automates repetitive work. Bringing together all the required context enables analysts to take action quickly, without getting lost in a myriad of use cases, different screens, workflows and search languages. It can also help security analysts respond quickly without creating endless playbooks to cover every possible scenario. XDR unifies insights from endpoint detection and response (EDR), network data and security analytics logs and events, as well as other solutions, such as cloud workload and data protection solutions, to provide a complete picture of potential threats. XDR incorporates automation for root cause analysis and recommended response, which is critical in order to respond quickly with confidence across a complex IT and security infrastructure. Whether your primary challenge is the complexity of tools, data and workflows or preventing a ransomware actor from moving laterally across your environment, quickly detecting and containing threats is of the essence.


Why Your Code Needs Abstraction Layers

By creating your abstraction in one layer, everything related to it is centralized, so any changes can be made in one place. Centralization is related to the “Don’t repeat yourself” (DRY) principle, which can be easily misunderstood. DRY is not only about the duplication of code, but also of knowledge. Sometimes it’s fine for two different entities to have the same code duplicated because this achieves isolation and allows for the future evolution of those entities separately. ... By creating the abstraction layer, you expose a specific piece of functionality and hide implementation details. Now code can interact directly with your interface and avoid dealing with irrelevant implementation details. This improves the code readability and reduces the cognitive load on the developers reading the code. Why? Because a policy is less complex than its details, so interacting with it is more straightforward. ... Abstraction layers are great for testing, as you get the ability to replace details with another set of details, which helps isolate the areas being tested and properly create test doubles.
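
A short sketch of the idea: callers depend on a small interface, the details live in one place, and tests swap in a double. The Storage protocol and both implementations are illustrative, not from the article:

```python
from typing import Protocol

class Storage(Protocol):
    def save(self, key: str, data: bytes) -> None: ...
    def load(self, key: str) -> bytes: ...

class S3Storage:
    """Real implementation: the details (client, bucket) are centralized."""
    def __init__(self, client, bucket: str):
        self._client, self._bucket = client, bucket
    def save(self, key: str, data: bytes) -> None:
        self._client.put_object(Bucket=self._bucket, Key=key, Body=data)
    def load(self, key: str) -> bytes:
        return self._client.get_object(Bucket=self._bucket, Key=key)["Body"].read()

class FakeStorage:
    """Test double: swaps the details for an in-memory dict."""
    def __init__(self):
        self._data: dict[str, bytes] = {}
    def save(self, key: str, data: bytes) -> None:
        self._data[key] = data
    def load(self, key: str) -> bytes:
        return self._data[key]

def archive_report(storage: Storage, report: bytes) -> None:
    storage.save("reports/latest", report)   # callers see only the interface
```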


How to get more women in tech and retain them

When it comes to the demand side, although much progress has been made in terms of escalating diversity up the corporate agenda, there still needs to be a seismic shift in employment practices. Specifically, there needs to be a sweeping change in mindset amongst hiring managers, who still often hire based on ‘cultural fit’, as well as a new approach to how companies help women to break the glass ceiling, through mentoring, career development support and family friendly policies. In the case of recruitment, hiring for cultural fit tends to favour the status quo in a company, whether that relates to race, gender, age, socioeconomic level and so on. That makes it harder for anyone who doesn’t ‘fit the mould’ to get into sectors where they are currently under-represented. Instead, hiring managers ought to look towards integrating ‘value fit’ in their hiring process. The values of a business are the things that drive the way they work and can include everything from teamwork and collaboration, to problem-solving and customer focus. Yes, you need to ensure your employees don’t clash with one another, but a value-fit approach will often lead to this outcome too.


How to Reduce Burnout in IT Security Teams

When our work and personal life interact or become a blur between what is personal life and what is work life, this is when we have to look at our setup. Working from home allows employees to have more flexibility and focus on their work. It also cuts down on commute hours. However, some of us are still trying to separate our work life from our personal life throughout the pandemic. Think about it. We take calls in our kitchen or bedroom at times. Kitchen and bedroom are personal life spaces, not work life spaces. Having a separate space for work helps a lot for balancing. But not everyone has this privilege; as a consequence, the blurriness of work and personal life can impact us, and burnout can creep in. However, if you section a part of the room as a work space, and only use that particular spot for work, it does help. Lastly, no matter what, have work boundaries set, such as turning off work equipment at 6PM during the week and off the whole weekend, and keep those boundaries in place.



Quote for the day:

"The actions of a responsible executive are contagious." - Joe D. Batton

Daily Tech Digest - November 02, 2021

Complexity is killing software developers

“There is more to this profession than writing code; that is the means to an end,” Hightower said. “Maybe we are saying we have built enough and can pause on building new things, to mature what we have and go back to our respective roles of consuming technology. Maybe this is the happy ending of the devops and collaboration movement we have seen over the past decade.” The market is responding to this complexity with an ever-growing list of opinionated services, managed options, frameworks, libraries, and platforms to help developers contend with the complexity of their environment. “No vendor is or will be in a position to provide every necessary piece, of course. Even AWS, with the most diverse application portfolio and historically unprecedented release cadence, can’t meet every developer need and can’t own every relevant developer community,” O’Grady wrote in a 2020 blog post. That being said, “there is ample evidence to suggest that we’re drifting away from sending buyers and developers alike out into a maze of aisles, burdening them with the task of picking primitives and assembling from scratch.”


Securing SaaS Apps — CASB vs. SSPM

There is often confusion between Cloud Access Security Brokers (CASB) and SaaS Security Posture Management (SSPM) solutions, as both are designed to address security issues within SaaS applications. CASBs protect sensitive data by implementing multiple security policy enforcements to safeguard critical data. For identifying and classifying sensitive information, like Personally Identifiable Information (PII), Intellectual Property (IP), and business records, CASBs definitely help. However, as the number of SaaS apps increases, the number of misconfigurations grows and the possible exposure widens, and this cannot be mitigated by CASBs. These solutions act as a link between users and cloud service providers and can identify issues across various cloud environments. Where CASBs fall short is that they identify breaches after they happen. When it comes to getting full visibility and control over the organization's SaaS apps, an SSPM solution would be the better choice, as the security team can easily onboard apps and get value in minutes — from the immediate configuration assessment to ongoing and continuous monitoring.


11 cybersecurity buzzwords you should stop using right now

The terms whitelist and blacklist date back to the some of the earliest days of cybersecurity. Associating “white” with good, safe, or permitted, and “black” with bad, dangerous, or forbidden, the phrases are still commonly applied to allow or deny use or access relating to various elements including passwords, applications, and controls. Cybersecurity consultant Harman Singh thinks the terms need urgently replacing because of harmful racial overtones associated with them, suggesting allow lists and deny lists serve the same purpose without potentially damaging connotations linked to ethnicity and race. “This is such a small yet significant, change” he tells CSO. “The NCSC made this conscious change last year to avoid racial tone. Still only a handful of companies in the industry have thought about doing this. Why don’t we all follow this example to stamp out such terms?” In a blog post, Emma W, head of advice and guidance at the NCSC, wrote: “You may not see why this matters. If you’re not adversely affected by racial stereotyping yourself, then please count yourself lucky. For some of your colleagues, this really is a change worth making.”


How to Get Started with Competitive Programming?

First and foremost, what you need to do is pick your preferred programming language and become proficient with its syntax, fundamentals, and implementation. You need to make yourself familiar with built-in functions, conditional statements, loops, etc., along with the required advanced concepts such as the STL in C++ or BigInteger in Java. There are various languages out there that are suitable for Competitive Programming, such as C, C++, Java, Python, and many more ... What you need to know – some will suggest that it is not necessary to learn DSA before getting started with CP and that it can be picked up along the way; however, we recommend that you at least cover the DSA fundamentals like Array, Linked List, Stack, Queue, Tree, Searching, Sorting, Time and Space Complexity, etc. before starting to solve problems and doing competitive programming, as it'll help you feel confident and solve the majority of the problems. Without knowing Data Structures & Algorithms well, you won't be able to come up with an optimized, efficient, and ideal solution for a given programming problem.
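
A toy illustration of why one of those fundamentals (searching plus time complexity) pays off: on sorted input, binary search needs roughly log2(n) comparisons where a linear scan may need n:

```python
import bisect

data = list(range(10_000_000))   # sorted input
target = 9_999_999

# O(n) linear scan: ~10 million comparisons in the worst case.
# idx = next(i for i, v in enumerate(data) if v == target)

# O(log n) binary search: ~23 comparisons.
idx = bisect.bisect_left(data, target)
assert data[idx] == target
```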


‘Trojan Source’ Bug Threatens the Security of All Code

“It is already hard for humans to tell ‘this is OK’ from ‘this is evil’ in source code,” Weaver said. “With this attack, you can use the shift in directionality to change how things render with comments and strings so that, for example, ‘This is okay’ is how it renders, but ‘This is’ okay is how it exists in the code. This fortunately has a very easy signature to scan for, so compilers can [detect] it if they encounter it in the future.” The latter half of the Cambridge paper is a fascinating case study on the complexities of orchestrating vulnerability disclosure with so many affected programming languages and software firms. ... “We met a variety of responses ranging from patching commitments and bug bounties to quick dismissal and references to legal policies,” the researchers wrote. “Of the nineteen software suppliers with whom we engaged, seven used an outsourced platform for receiving vulnerability disclosures, six had dedicated web portals for vulnerability disclosures, four accepted disclosures via PGP-encrypted email, and two accepted disclosures only via non-PGP email. They all confirmed receipt of our disclosure, and ultimately nine of them committed to releasing a patch.”
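
The "easy signature" Weaver mentions can be scanned for in a few lines: flag any Unicode bidirectional control characters in a source file. A minimal sketch (the character set below covers the bidi override/embedding/isolate controls the Trojan Source paper discusses):

```python
import sys

# Bidi override/embedding/isolate controls abused by Trojan Source attacks.
BIDI_CONTROLS = set("\u202a\u202b\u202c\u202d\u202e\u2066\u2067\u2068\u2069")

def scan(path: str) -> None:
    with open(path, encoding="utf-8") as f:
        for lineno, line in enumerate(f, start=1):
            hits = [c for c in line if c in BIDI_CONTROLS]
            if hits:
                codes = ", ".join(f"U+{ord(c):04X}" for c in hits)
                print(f"{path}:{lineno}: bidi control(s) found: {codes}")

if __name__ == "__main__":
    for p in sys.argv[1:]:
        scan(p)
```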


Cloud, microservices, and data mess? Graph, ontology, and application fabric to the rescue.

Data integration may not sound as deliciously intriguing as AI or machine learning tidbits sprinkled on vanilla apps. Still, it is the bread and butter of many, the enabler of all cool things using data, and a premium use case for concepts underpinning AI, we argued back then. The key concepts we advocated for then have been widely recognized and adopted today in their knowledge graph and data fabric guise: federation and semantics. Back then, the concepts were not as widely adopted, and parts of the technology were less mature and recognized. Today, knowledge graphs and data fabrics are top of mind; just check the latest Gartner reports. The reason we're revisiting that old story is not to bask in some "told you so" self-righteousness, but to add to it. Knowledge graphs and data fabrics can, and hopefully will, eventually, address data integration issues. ... The final part of the process is orchestrating services, i.e. executing, coordinating, and deploying them, in the right order and with the right parameters, wherever they may be - on-premises, in the cloud, or in containers. That creates what Duggal called an "application fabric", as an extension of the notion of a data fabric.


Chaos Engineering Made Simple

The shift toward cloud native technologies has enabled the development of more manageable, scalable and dependable applications, but at the same time it has brought about unprecedented dynamism to critical services. This is due to the multitude of coexisting cloud native components that have to be managed individually. Failure of even a single microservice can lead to a cascading failure of other services, which can cause the entire application deployment to collapse. ... LitmusChaos was created with the primary goal of performing chaos engineering in a cloud native manner, scaling it as per the cloud native norms, managing the life cycle of chaos workflows and defining observability from a cloud native perspective. Chaos experiments help achieve this goal by injecting chaos into the target resources, using simple, declarative manifests. These Kubernetes custom resource (CR) manifests allow for an experiment to be flexibly fine-tuned to produce the desired chaos effect, as well as contain the experiment blast radius so as to not harm other resources in the environment. 
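
As a rough sketch of the declarative approach, here is how a ChaosEngine custom resource might be submitted from Python with the kubernetes client. The manifest follows the LitmusChaos ChaosEngine CR in spirit; treat the names, labels, and fields as assumptions to verify against the Litmus docs for your version:

```python
from kubernetes import client, config

config.load_kube_config()

chaos_engine = {
    "apiVersion": "litmuschaos.io/v1alpha1",
    "kind": "ChaosEngine",
    "metadata": {"name": "checkout-chaos", "namespace": "default"},
    "spec": {
        "appinfo": {"appns": "default", "applabel": "app=checkout"},
        "engineState": "active",
        "experiments": [{"name": "pod-delete"}],  # contained blast radius
    },
}

client.CustomObjectsApi().create_namespaced_custom_object(
    group="litmuschaos.io", version="v1alpha1",
    namespace="default", plural="chaosengines", body=chaos_engine,
)
```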


Future of Blockchain: How Will It Revolutionize The World In 2022 & Beyond!

In an ever-evolving world, one of the most relevant use cases for blockchain right now is cryptocurrencies, and it is here to remain that way for some time. However, an even more exciting future is emerging in blockchain technology: non-fungible tokens (NFTs). NFTs are a revolutionary new way of buying and selling digital assets that represent real-world items. All NFTs are unique and can’t be replaced or swapped — they can only be purchased, sold, traded, or given away by the original owner/creator of that asset. NFTs could power a whole new wave of digital collectibles, from rare artwork to one-of-a-kind sneakers and accessories. They could also be used in place of items in video games or other virtual worlds. ... Blockchain could replace this system with a digital identity that is safe, secure, and easy to manage. Instead of proving who you are by recalling some personal, arbitrary piece of information that could potentially be guessed or stolen, your digital identity is based on the uniquely random set of numbers assigned to each user on a blockchain network.


Quantum computers: Eight ways quantum computing is going to change the world

For decades, researchers have tried to teach classical computers how to associate meaning with words to try and make sense of entire sentences. This is a huge challenge given the nature of language, which functions as an interactive network: rather than being the 'sum' of the meaning of each individual word, a sentence often has to be interpreted as a whole. And that's before even trying to account for sarcasm, humour or connotation. As a result, even state-of-the-art natural language processing (NLP) classical algorithms can still struggle to understand the meaning of basic sentences. But researchers are investigating whether quantum computers might be better suited to representing language as a network -- and, therefore, to processing it in a more intuitive way. The field is known as quantum natural language processing (QNLP), and is a key focus of Cambridge Quantum Computing (CQC). The company has already experimentally shown that sentences can be parameterised on quantum circuits, where word meanings can be embedded according to the grammatical structure of the sentence. 


Anomaly Detection Using ML.NET

As the name suggests, anomaly detection is about finding what deviates from what you would normally expect. It helps identify data points, observations, or events that deviate from the normal behavior of a dataset. Many distributed systems now need their performance monitored, and a considerable amount of data and events pass through them. Anomaly detection makes it possible to determine where the source of a problem lies, which significantly reduces the time to rectify the fault. It also allows us to detect outliers and report them accordingly. All of these applications share the common focus mentioned earlier: outliers. These are cases where data points are distant from the others, do not follow a particular pattern, or match known anomalies. Identifying such data points is what allows anomalies to be spotted and responded to correctly.
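ML.NET provides its own time-series anomaly detectors for this. As a language-neutral sketch of the underlying idea, the following rolling z-score detector in Python flags any point that sits far outside the recent window's mean; the window size, threshold and data are illustrative:

    import numpy as np

    def detect_spikes(series, window=20, threshold=3.0):
        """Flag points more than `threshold` standard deviations
        away from the mean of the preceding `window` points."""
        series = np.asarray(series, dtype=float)
        anomalies = []
        for i in range(window, len(series)):
            recent = series[i - window:i]
            mu, sigma = recent.mean(), recent.std()
            if sigma > 0 and abs(series[i] - mu) > threshold * sigma:
                anomalies.append(i)
        return anomalies

    # Steady traffic with one sudden spike at index 30.
    data = [100.0] * 25 + [101, 99, 100, 102, 98, 500, 100, 101]
    print(detect_spikes(data))  # -> [30]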



Quote for the day:

"Different times need different types of leadership." -- Park Geun-hye

Daily Tech Digest - November 01, 2021

Why Is It Good For IT Professionals to Learn Business Analytics?

Most IT professionals who want to broaden their horizons go for business analytics courses. It gives their careers a boost and opens up more job opportunities, because IT professionals are well versed in software development and can link business challenges to technical solutions. The main task of a business analyst is to gather and analyze data and sort it according to the requirements of the business. Compared with others who take up a job as a business analyst, IT professionals can solve problems, identify risks, and manage technology-related constraints more efficiently. Therefore, learning business analytics can open more doors for IT professionals. Business analytics requires hard skills along with soft skills. Business analysts must know how to analyze data trends, convey the information to others, and help them apply it on the business side. A person with a strong IT background who understands how systems, products, and tools work can learn business analytics to boost their career and enjoy the hybrid role.


How to develop a data governance framework

Rather than begin with a set of strict guidelines that launches across the entire organization at once, organizations should start with a framework applied to a certain data set that will be used by a specific department for a specific analytics project, and then build out from there, according to Matt Sullivant, principal product manager at Alation. "As you're establishing your framework and trying to figure out what you want from the data governance, you have milestones along the way," he said. "You start off small with a small set of data and a small set of policies and then eventually you mature out to more robust processes and tackle additional data domains." Sullivant added that by starting small, there's a better chance of success, and by showing success on a small scale there's a better chance of both organizational leaders and potential end users of data seeing the value of a data governance framework. "A lot of quick wins show the value of a data governance program, and then you can expand from there," he said.


When low-code becomes high maintenance

Low-code offers the promise of rapid app creation on a massive scale but can easily become de-prioritised due to lack of understanding, internal capacity, or a skills gap within the team. This can often mean that objectives are either not aligned or are being missed, stifling return on investment. Just like any IT project, it's crucial to take a step back before you begin building applications. Review your current systems and processes first, document any strengths and weaknesses, and define what success looks like to your business. These measures will ensure you know which areas require extra attention and resources, while providing clarity around application outcomes. ... The simple 'drag and drop' mindset associated with low-code tools means there is a temptation to jump into the build without first scoping the business requirements. Good scoping largely depends on asking the right questions, yet many organisations struggle to apply this clear thinking when developing apps. A step-by-step approach can help ensure positive outcomes. Think about who will be involved; what you would like to achieve; how you will get there; the barriers; and how success will be measured.


Multinational Police Force Arrests 12 Suspected Hackers

The suspected hackers are alleged to have played various roles in organized crime groups. They are believed to be responsible for gaining initial access to networks, using multiple mechanisms to compromise IT networks, including brute-force attacks, SQL injection, stolen credentials and phishing emails with malicious attachments. "Once on the network, some of these cyber actors would focus on moving laterally, deploying malware such as Trickbot, or post-exploitation frameworks such as Cobalt Strike or PowerShell Empire, to stay undetected and gain further access," Europol states. In addition, it is claimed that the criminals would lie undetected in the compromised system for months, looking for further weaknesses in the network before monetising the infection by deploying ransomware such as LockerGoga, MegaCortex and Dharma, among others. "The effects of the ransomware attacks were devastating as the criminals had had the time to explore the IT networks undetected. ..." Europol notes.


Doing the right deals

Deals don't always produce value. PwC research has shown that 53% of all acquisitions underperformed their industry peers in terms of total shareholder return (TSR). And as PwC's 2019 report "Creating value beyond the deal" shows, the deals that deliver value don't happen by accident. Success often rests on a strong strategic fit, coupled with a clear plan for how that value will be realized. Armed with that knowledge, we set out to better understand the relationship between strategic fit and deal success. ... When we analyzed deal success against stated strategic fit, we found that the stated strategic intent had little or no impact on value creation, with the logical exception of capability-access deals. Whether a deal fits depends only minimally on its aim; what matters is whether there is a capabilities fit between the buyer and the target. Indeed, there was little variance among the remaining four types of deals (product or category adjacency, geographic adjacency, consolidation, and diversification), which on average performed either neutrally or negatively from a value-generation perspective compared with the market.


The antidote to brand impersonation attacks is awareness

There is no silver bullet here, and the usual best practices definitely apply. At a high level, I would say: ensure the people in your organization are aware and trained in security awareness. I mention this first because it's all about people. These same people work with brands and systems that need to be protected. The most commonly used attack route is still email, and this extends to other communication channels and platforms. It seems obvious to start by protecting these channels. Getting back to awareness, this is not just about people; it's also about being aware of (unauthorized) usage of your organization's brand and having protection and remediation measures in place when that brand gets abused in an impersonation attack. This might sound overwhelming, and in a way, it is. Similar to security, the work on brand impersonation protection is never entirely done. Can it be simplified? Well, yes. Make a risk assessment and start with the first steps that deliver the best ROI on protection. In my view, security is a journey, even when it's in a close-to-perfect state at any given moment.


CIO role: Why the first 90 days are crucial

The purpose of the 90-day plan isn't to have everything sorted out on Day 1, but rather to provide guidelines and milestones for you to achieve. For example, you might set a 30-day goal to meet with all the senior leaders in the organization. The specifics can be hammered out after you've had time to assess who the key leaders are and how to connect with them. Your second 30 days (days 30-60) might entail getting to know the mid-level leaders or spending more time with your second-in-command in the IT division. The plan will guide you; the details will evolve as your 90 days elapse. ... The first 90 days are when initial impressions and expectations are formed. Set the agenda in a dynamic and intelligent manner so you are seen as an active, engaged, and competent leader from Day 1. If you come out of the gate slowly or ineffectively (or worse, stumble badly), you'll struggle to overcome that reputation. If you come out too aggressively, on the other hand, your peers will be wary and you'll struggle to build trust. Either extreme will negatively impact your success trajectory.


How to choose an edge gateway

For organizations with significant IoT deployments, edge computing has emerged as an effective way to process sensor data close to where it is created. Edge computing reduces the latency associated with moving data from a remote location to a centralized data center or to the cloud for analysis, slashes WAN bandwidth costs, and addresses security, data-privacy, and data-autonomy issues. On a more strategic level, edge computing fits into a private-cloud/public-cloud/multi-cloud architecture designed to enable new digital business opportunities. One big challenge of edge computing is figuring out what to do with all the different kinds of data being generated there. Some of the data is simply not relevant or important (temperature readings from a motor that is not overheating). Other data can be handled at the edge; this type of intermediate processing is specific to that node and of a more pressing nature. The cloud is where organizations would apply AI and machine learning to large data sets in order to spot trends ... The fulcrum that balances the weight of raw data generated by OT-based sensors, actuators, and controllers against the IT requirement that only essential data be transmitted to the cloud is the edge gateway.
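A minimal Python sketch of that balancing act at the gateway, with hypothetical device names, thresholds, and forwarding function: normal readings are aggregated locally, and only summaries and alerts cross the WAN:

    from statistics import mean

    ALERT_TEMP_C = 90.0   # hypothetical overheating threshold
    buffer = []           # readings held at the edge

    def forward_to_cloud(payload):
        # Placeholder for the gateway's upstream (e.g. MQTT/HTTPS) link.
        print("-> cloud:", payload)

    def on_reading(motor_id, temp_c):
        if temp_c >= ALERT_TEMP_C:
            # Pressing, node-specific event: alert upstream immediately.
            forward_to_cloud({"motor": motor_id, "alert": "overheating", "temp": temp_c})
            return
        buffer.append(temp_c)
        if len(buffer) >= 60:  # e.g. one minute of readings
            # Only an essential summary crosses the WAN, not the raw stream.
            forward_to_cloud({"motor": motor_id, "avg_temp": round(mean(buffer), 1)})
            buffer.clear()

    for t in [70.2, 71.0, 95.3, 70.8]:  # sample readings
        on_reading("motor-7", t)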


How to better design and construct digital transformation strategies for future business success

What we are witnessing now is the need to reconsider end-to-end design: how best to merge ready-made cloud services from a hyperscaler or SaaS provider with the telecoms world, while providing connectivity in a secure manner. How does this manifest itself during a transformation within an organisation, and, importantly, how can a business align its strategy with its implementation? The answer lies in concise messaging built upon a clear strategy. ... With all of this in mind, non-functional designs, as well as the functional elements, are still crucial and cannot simply be left to the cloud provider, as many businesses still believe they can. Resilience of a service, and recovery actions in the event of a failure, need deep thought and consideration. Ideally these should be automated via robotic process automation, but for this to succeed, instrumentation of the service and event correlation are needed to truly determine where in the service chain an error has occurred.


Is Monolith Dead?

Monolith systems have the edge when it comes to simplicity. If the development process can avoid turning the monolith into a big ball of mud, if the system (as defined above) can be broken into sub-systems such that each is a complete unit in itself, and if these sub-systems can be developed in a microservices style, we can get the best of both worlds. Such a sub-system is nothing but a "coarse-grained service", a self-contained unit of the system. A coarse-grained service can be a single point of failure. By definition, it comprises significant sub-parts of a system, so its failure is highly undesirable. If a part of this coarse-grained service fails (a part which would otherwise have been a fine-grained service in its own right), the service should take the necessary steps to mask the failure, recover from it, and report it. The trouble begins when the coarse-grained service fails as a whole. Still, this is not a deal-breaker: if the right mechanisms for high availability are in place (containerized, multi-zone, multi-region, stateless), the chances of that happening are slim.
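The masking behaviour described above can be sketched as follows (all names hypothetical): a failing sub-part is retried, then masked with a degraded fallback result while the fault is reported:

    import logging, time

    log = logging.getLogger("coarse-grained-service")

    def with_masking(operation, fallback, retries=2, delay=0.1):
        """Retry a failing sub-part, then mask the failure with a
        degraded fallback result while reporting the fault."""
        for attempt in range(1, retries + 1):
            try:
                return operation()
            except Exception as exc:
                log.warning("attempt %d failed: %s", attempt, exc)
                time.sleep(delay)
        log.error("sub-part unavailable, serving fallback")
        return fallback()

    # Hypothetical usage: a pricing lookup inside a larger order service.
    def query_pricing():
        raise ConnectionError("pricing sub-part down")

    price = with_masking(query_pricing, fallback=lambda: {"price": None, "stale": True})
    print(price)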



Quote for the day:

"No man can stand on top because he is put there." -- H. H. Vreeland