Daily Tech Digest - August 03, 2022

Why the future of APIs must include zero trust

Devops leaders are pressured to deliver digital transformation projects on time and under budget while developing and fine-tuning APIs at the same time. Unfortunately, API management and security become an afterthought when devops teams rush to finish projects on deadline. As a result, API sprawl happens fast, multiplying when the devops teams across an enterprise don’t have the API management tools and security they need. Devops teams need a solid, scalable methodology to limit API sprawl and to provide least-privileged access to APIs. In addition, they need to move API management to a zero-trust framework to help reduce the skyrocketing number of breaches happening today. The recent webinar sponsored by Cequence Security and Forrester, Six Stages Required for API Protection, hosted by Ameya Talwalkar, founder and CEO, with guest speaker Sandy Carielli, Principal Analyst at Forrester, provides valuable insights into how devops teams can protect APIs. Their discussion also highlights how devops teams can improve API management and security.


India withdraws personal data protection bill that alarmed tech giants

The move comes as a surprise, as lawmakers had indicated recently that the bill, unveiled in 2019, could see the “light of the day” soon. New Delhi received dozens of amendments and recommendations from a Joint Committee of Parliament that “identified many issues that were relevant but beyond the scope of a modern digital privacy law,” said India’s Junior IT Minister Rajeev Chandrasekhar. The government will now work on a “comprehensive legal framework” and present a new bill, he added. ... “The Personal Data Protection Bill, 2019 was deliberated in great detail by the Joint Committee of Parliament. 81 amendments were proposed and 12 recommendations were made towards a comprehensive legal framework on the digital ecosystem. Considering the report of the JCP, a comprehensive legal framework is being worked upon. Hence, in the circumstances, it is proposed to withdraw ‘The Personal Data Protection Bill, 2019’ and present a new bill that fits into the comprehensive legal framework,” India’s IT Minister Ashwini Vaishnaw said in a written statement Wednesday.


Don't overengineer your cloud architecture

A recent Deloitte study uncovered some interesting facts about cloud computing budgets. You would think budget size would make a core difference in how effectively businesses leverage cloud computing, but budgets turn out to be poor predictors of success. Although this could indicate many things, I suspect that money is simply not correlated with value in cloud computing. In many instances, this may be due to the design and deployment of overly complex cloud solutions when simpler, more cost-effective approaches would work better to reach the optimized value that most businesses seek. If you ask the engineers why they designed the solution this way (whether overengineered or not), they will defend their approach with some reason or purpose that nobody understands but them. ... This is a systemic problem now, which has arisen because we have very few qualified cloud architects out there. Enterprises are settling for someone who may have passed a vendor’s architecture certification, which only makes them proficient in a very narrow grouping of technology and often doesn’t consider the big picture.


Leveraging data privacy by design

Privacy laws and regulations, therefore, can include guidelines for facilitating industry standards, benchmarks for privacy-enhancing technologies and funding for privacy-by-design research to incentivise technology designers to enhance privacy safeguards in product designs, thereby promoting technological models that are privacy savvy. The above can be better understood from the following example. The price paid for a helmet by a motorbike rider is a compliance cost, as it is an additional purchase requirement for safety over and above his immediate need for using a bike as a tool for commuting. However, a seat belt that is subsumed as a component of a car, not an additional purchase, is perceived differently by the owner. Thus, compliance requirements that are perceived as additional obligations result in the perception of increased compliance costs, whereas compliance requirements embedded in the design of the product itself are considered part of the total product price and not separate costs. Privacy by design can thus prompt a shift in business models whereby, through the incorporation of privacy features within the technological design of the product itself, privacy compliance becomes part of the product rather than a separate cost.


Is it bad to give employees too many tech options?

The most important question in developing (or expanding) an employee-choice model is determining how much choice to allow. Offer too little and you risk undermining the effort's benefits. Offer too much and you risk a level of tech anarchy that can be as problematic as unfettered shadow IT. There isn’t a one-size-fits-all approach. Every organization has a unique culture, requirements/expectations, and management capabilities. An approach that works in a marketing firm would differ from one for a healthcare provider, and a government agency would need a different approach than a startup. Options also vary depending on the devices employees use — desktop computing and mobile often require differing approaches, particularly for companies that employ a BYOD program for smartphones. ... Google is making a play for the enterprise by offering ChromeOS Flex, which turns aging PCs and Macs into Chromebooks. This allows companies to continue to use machines that have dated or limited hardware, but it also means adding support for ChromeOS devices.


Patterns and Frameworks - What's wrong?

Many people say that we should prefer libraries to frameworks, and I must say that might be true. If a library can do the job you need (for example, the communication between a client and a server I presented at the beginning of the article) and meets the performance, security, protocol and any other requirements your service needs to support, then the fact that we can have a framework automate some class generation for us might be of minor importance, especially if such a framework cannot deal with the application classes and would force us to keep creating new patterns just to convert object types. ... Yet, they fall short when dealing with app-specific types and force us either to change our types just to be able to work with the framework or, when two or more frameworks are involved, there's no way out and we need to create alternative classes and copy data back and forth, doing the necessary conversions, which completely defeats the purpose of having the transparent proxies.
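To make the "copy data back and forth" problem concrete, here is a minimal, hypothetical sketch (all class and function names are invented for illustration) of the conversion glue that a framework-imposed type forces on an application:

```python
from dataclasses import dataclass

# The application's own domain type.
@dataclass
class User:
    id: int
    name: str

# Hypothetical type imposed by a framework that cannot
# consume our domain classes directly.
@dataclass
class FrameworkUserDto:
    user_id: int
    display_name: str

# The conversion glue the article laments: pure copying,
# no business value, duplicated for every type and framework.
def to_dto(user: User) -> FrameworkUserDto:
    return FrameworkUserDto(user_id=user.id, display_name=user.name)

def from_dto(dto: FrameworkUserDto) -> User:
    return User(id=dto.user_id, name=dto.display_name)

user = User(id=1, name="Ada")
assert from_dto(to_dto(user)) == user  # round-trip copies data both ways
```

With two or more frameworks in play, each one demands its own pair of such converters, which is exactly what defeats the transparency the proxies were supposed to provide.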


Where are all the technologists? Talent shortages and what to do about them

Instead of looking for that complete match, shift to 80% – the other 20% can almost always be met through training, support and development once in the job. Another area for flexibility is age. The most sought-after candidates are in the 35-49 age bracket. But don’t rule out the under-35s or the over-50s. There are brilliant people in both groups – one with all the potential for the future, the other with invaluable experience and work knowhow. This brings us to another absolutely key approach: investing in training and upskilling. I have one client who is looking ahead and can see that they will have a significant software development skills requirement in about four years’ time. So they are training their existing software engineers now, so they can move into these roles when the time comes. There is a growing emphasis among digital leaders on increasing the amount of internal cross-training into tech. This is something that can be applied externally, too. Look outside the business for talent that can be supported into a tech career – people who may be in other fields right now but have the right aptitude, mindset and ambition.


We’re Spending Billions Each Year on Cybersecurity. So Why Aren’t Data Breaches Going Away?

As companies invest heavily in technology, communication, and training to reduce cybersecurity risk and as they begin seeing the positive impact of those efforts, they may let their guard down—not paying as much attention to the risks, not communicating as often, or failing to ensure that new employees (or employees in new positions) are receiving the information and training they need. Cybercrooks only need to be successful once to achieve their goals, but companies need to be successful 100% of the time to avoid being compromised. Consider this: security is subject to the same natural laws that govern the rest of the universe. Entropy is real… we move from order to chaos. ... A strong security culture is a must-have to combat the continuous threats that all companies are subject to. Employees’ security awareness, behaviors and the organization’s culture must be assessed regularly. Policies and training programs should be consistently updated to address the changing threat landscape. Failure to do so puts companies at risk of data theft, business interruption, or falling victim to ransomware scams.


What is supervised machine learning?

A common process involves hiring a large number of humans to label a large dataset. Organizing this group is often more work than running the algorithms. Some companies specialize in the process and maintain networks of freelancers or employees who can code datasets. Many of the large models for image classification and recognition rely upon these labels. Some companies have found indirect mechanisms for capturing the labels. Some websites, for instance, want to know if their users are humans or automated bots. One way to test this is to put up a collection of images and ask the user to search for particular items, like a pedestrian or a stop sign. The algorithms may show the same image to several users and then look for consistency. When a user agrees with previous users, that user is presumed to be a human. The same data is then saved and used to train ML algorithms to search for pedestrians or stop signs, a common job for autonomous vehicles. Some algorithms use subject-matter experts and ask them to review outlying data. Instead of classifying all images, this approach works with the most extreme values and extrapolates rules from them.
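The consistency check described above can be sketched as a simple majority vote. The function name and thresholds here are illustrative, not any particular site's implementation:

```python
from collections import Counter

def consensus_label(votes, min_votes=3, threshold=0.75):
    """Accept a crowd-sourced label only when enough users agree.

    votes: labels submitted by different users for the same image.
    Returns the winning label, or None if agreement is too weak.
    """
    if len(votes) < min_votes:
        return None  # not enough independent opinions yet
    label, count = Counter(votes).most_common(1)[0]
    return label if count / len(votes) >= threshold else None

# Three of four users agree, so the label is accepted...
assert consensus_label(["stop sign", "stop sign", "stop sign", "tree"]) == "stop sign"
# ...but two votes are too few to trust.
assert consensus_label(["stop sign", "tree"]) is None
```

Only images that clear the agreement bar would be saved into the training set; the rest go back into the queue for more votes or expert review.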


Machine learning creates a new attack surface requiring specialized defenses

While all adversarial machine learning attack types need to be defended against, different organizations will have different priorities. Financial institutions leveraging machine learning models to identify fraudulent transactions are going to be highly focused on defending against inference attacks. If attackers understand the strengths and weaknesses of a fraud detection system, they can alter their techniques to go undetected, bypassing the model altogether. Healthcare organizations could be more sensitive to data poisoning. The medical field was among the earliest adopters of using massive historical data sets to predict outcomes with machine learning. Data poisoning attacks can lead to misdiagnosis, alter the results of drug trials, misrepresent patient populations and more. Security organizations themselves are presently focusing on machine learning bypass attacks that are actively being used to deploy ransomware or backdoor networks. ... The best advice I can give to a CISO today is to embrace patterns we’ve already learned on emerging technologies.
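To see why data poisoning is so damaging, here is a toy sketch (an invented one-dimensional nearest-centroid classifier, not a real medical model) in which flipping a handful of training labels is enough to reverse a prediction:

```python
def nearest_centroid_fit(xs, ys):
    """Compute the mean feature value (centroid) of each class."""
    cents = {}
    for label in set(ys):
        vals = [x for x, y in zip(xs, ys) if y == label]
        cents[label] = sum(vals) / len(vals)
    return cents

def predict(cents, x):
    """Assign x to the class with the nearest centroid."""
    return min(cents, key=lambda label: abs(cents[label] - x))

# Clean training data: class 0 clusters near 0.0, class 1 near 1.0.
xs = [0.0, 0.1, 0.2, 0.9, 1.0, 1.1]
ys = [0, 0, 0, 1, 1, 1]

clean = nearest_centroid_fit(xs, ys)
assert predict(clean, 0.05) == 0  # correctly classified

# Poisoning: an attacker flips several training labels, dragging the
# centroids toward each other and reversing the model's prediction.
poisoned_ys = [1, 1, 0, 1, 0, 0]
poisoned = nearest_centroid_fit(xs, poisoned_ys)
assert predict(poisoned, 0.05) == 1  # same input, now misclassified
```

Real models are far larger, but the mechanism is the same: corrupt the training data and every downstream prediction inherits the corruption.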



Quote for the day:

"There are three secrets to managing. The first secret is have patience. The second is be patient. And the third most important secret is patience." -- Chuck Tanner

Daily Tech Digest - August 02, 2022

What Women Should Know Before Joining the Cybersecurity Industry

Women still are underrepresented in software engineering and IT. And many times, cybersecurity gets lumped together with those, and with that comes the belief that it requires the same skills. And that's simply not the case. At the core, the job of cybersecurity teams is to assess, prioritize, and work to resolve risks; nothing in there requires a STEM background or an understanding of software engineering. Sure, these risks might relate to code a developer wrote, or a cloud environment the IT team deployed, but reviewing alerts, assessing the impact to the business and the potential risk, and determining the appropriate course of action — those are not things that require a security professional to be a developer or to moonlight in IT. Computer science skills and backgrounds aren't a barrier to the cybersecurity profession — we're a business function, not a technical one. ... If you're on a cybersecurity team, you're tasked with keeping all these teams safe, each and every day. But this isn't something you can do alone. You need help from all of them in order to deliver that protection.


Overcoming the Top 3 Challenges of Infrastructure Modernization

Container environments like Kubernetes provide similar benefits and challenges as the cloud. Containers empower IT teams to increase efficiency, agility and speed, improving application life cycle management and making it faster and easier to modernize existing applications. Like the cloud, though, containers must be optimized to deliver on their ability to reduce costs and streamline performance. To orchestrate containers effectively, IT must understand how to allocate them. As with cloud provisioning, under-allocating container resources can result in issues with service assurance, while over-allocation can lead to wasted spending, especially since individual application teams tend to request more resources than they need to be safe. Right-sizing container environments is particularly important when containers are used to manage the impact of fluctuating business demands on IT systems. It’s crucial to optimize container environments for your current state, but it’s also important to know what’s coming so resources can be allocated accordingly.
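In Kubernetes, that allocation is expressed per container through resource requests and limits. The manifest below is a hypothetical example with illustrative values: requests are what the scheduler reserves on a node, while limits cap actual usage, so the gap between the two is where both under- and over-allocation problems arise.

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: payments-api            # hypothetical workload
spec:
  containers:
  - name: app
    image: registry.example.com/payments-api:1.4
    resources:
      requests:                 # reserved capacity the scheduler accounts for
        cpu: 250m
        memory: 256Mi
      limits:                   # hard caps: CPU is throttled, exceeding
        cpu: 500m               # the memory limit kills the container
        memory: 512Mi
```

Right-sizing means tuning these numbers against observed usage: requests set too low risk eviction and service-assurance issues, while requests set too high strand cluster capacity that other teams pay for.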


Tracking Ransomware: Here's Everything We Still Don’t Know

ENISA estimates that during the timeframe it studied, there were 3,640 successful ransomware attacks, of which it was only able to obtain details for 623 incidents. "All results and conclusions as presented should take into account this disclaimer concerning the number of incidents used in this analysis" and highlight the overall lack of solid details about so many incidents, it says. "In addition, the fact that we were able to find publicly available information for [only] 17% of the cases highlights that when it comes to ransomware, only the tip of the iceberg is exposed and the impact is much higher than what is perceived," it says. Indeed, most attacks never get publicly reported, because victims don't want the negative publicity. Unfortunately, getting a victim to pay quickly and secretly suits ransomware-wielding attackers too. Law enforcement has a tough time identifying individual attackers or groups at work, prioritizing them based on impact, and issuing warnings to help other organizations block groups' commonly used tactics. 


Managing Kubernetes Secrets with the External Secrets Operator

ESO is a Kubernetes operator that connects to external secrets-management systems like the ones we mentioned above, reads secret information, and injects the values into Kubernetes secrets. It is a collection of custom API resources that provide a user-friendly abstraction for the external APIs that manage the lifecycle of the secrets for us. Like all other Kubernetes operators, ESO is composed of some main components:

Custom Resource Definitions (CRD): These define the data schema of the settings available for the operator, in our case the SecretStore and ExternalSecret definitions.

Programmatic Structures: These define the same data schema as the CRDs above using the programming language of choice, in our case Go.

Custom Resources (CR): These hold the values for the settings defined by the CRDs and describe the configuration for the operator.

Controller: This is where the actual work takes place. Controllers act on custom resources and are responsible for creating and managing the resources. They can be created in any programming language, and ESO controllers are created in Go.
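A minimal, illustrative ExternalSecret custom resource might look like the following. The names and paths are placeholders, and the exact API version depends on the ESO release you run:

```yaml
apiVersion: external-secrets.io/v1beta1
kind: ExternalSecret
metadata:
  name: db-credentials
spec:
  refreshInterval: 1h          # how often the controller re-syncs
  secretStoreRef:
    kind: SecretStore          # the CR that points at the external system
    name: my-secret-store      # placeholder store name
  target:
    name: db-credentials       # Kubernetes Secret the controller creates
  data:
    - secretKey: password      # key inside the created Secret
      remoteRef:
        key: prod/db           # placeholder path in the external manager
        property: password     # field to extract from that entry
```

The controller watches this CR, fetches `prod/db` from whatever backend the referenced SecretStore describes, and materializes an ordinary Kubernetes Secret that pods can mount as usual.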


Can artificial intelligence really help us talk to the animals?

Raskin is the co-founder and president of Earth Species Project (ESP), a California non-profit group with a bold ambition: to decode non-human communication using a form of artificial intelligence (AI) called machine learning, and make all the knowhow publicly available, thereby deepening our connection with other living species and helping to protect them. A 1970 album of whale song galvanised the movement that led to commercial whaling being banned. What could a Google Translate for the animal kingdom spawn? The organisation, founded in 2017 with the help of major donors such as LinkedIn co-founder Reid Hoffman, published its first scientific paper last December. The goal is to unlock communication within our lifetimes. “The end we are working towards is, can we decode animal communication, discover non-human language,” says Raskin. “Along the way and equally important is that we are developing technology that supports biologists and conservation now.” Understanding animal vocalisations has long been the subject of human fascination and study. 


Microsoft's new security tool lets you see your systems like a hacker would

The attack surface management service could be useful given data showing that attackers start scanning the internet for exposed vulnerable devices within 15 minutes of a major flaw's public disclosure, and that they generally continue scanning for older flaws like last year's nasty Exchange Server flaws, ProxyLogon and ProxyShell. This service discovers a customer's unknown and unmanaged resources that are visible and accessible from the internet – giving defenders the same view an attacker has when they select a target. Defender EASM helps customers discover unmanaged resources that could be potential entry points for an attacker. Across MSTIC and Microsoft 365 Defender Research, Microsoft is tracking 250 different actors and ransomware families. "We're providing intelligence across all of them and bringing that into your security team — not just to learn the latest news… but also to explore it, so if I see an indicator, I might explore where that might live on the network and connect that to what I'm seeing in my company. It's like a workbench for analysts inside a company," says Lefferts.


Microsoft hails success of hydrogen fuel cell trial at its New York datacentre

The company deployed a proton exchange membrane (PEM) fuel cell technology at its Latham site, which generates electricity by facilitating a chemical reaction between hydrogen and oxygen that creates no carbon emissions whatsoever. “The PEM fuel cell test in Latham demonstrated the viability of this technology at three megawatts, the first time at the scale of a backup generator at a datacentre,” the blog post stated. “Once green hydrogen is available and economically viable, this type of stationary backup power could be implemented across industries – from datacentres to commercial buildings and hospitals.” The company first started experimenting with the use of PEM fuel cells as an alternative to diesel backup generators in 2018, having previously tested and ruled out the use of natural gas-powered solid oxide fuel cells on cost grounds. This work gave way to a collaboration between Microsoft and the National Renewable Energy Laboratory in 2018 that saw the pair deploy a 65 kW PEM fuel cell generator to power a rack of computers.


Legislators Gear Up to Take On Cloud Outages

The good news, if you’re in favor of this kind of regulation (or the bad news if you’re not), is that regulatory bodies across the Atlantic seem to be sliding towards a new compliance regime for cloud providers along these lines. A paper from the UK Treasury, published last month, revealed that the Treasury and the Bank of England have been mulling a new regulatory framework for “critical” cloud-based third-party services since 2019. They propose fairly broad powers to enforce standards and investigate violations. This isn’t legislation, of course; that step, the paper notes, will come “when parliamentary time allows,” and since Britain won’t have a government before September, we will likely be hearing more of this in 2023. Meanwhile, on the Continent, the European Council and Parliament came to an understanding in May that the Digital Operational Resilience Act (DORA), a regulatory framework that is not yet law, will require financial firms to be able to “maintain resilient operations through a severe operational disruption,” including on cloud platforms.


What transformational leaders do differently

A transformational leader actively listens and establishes trust with their team, encourages diversity of thought, and creates an environment where the team feels they “belong” and are comfortable sharing ideas without judgment. Effective change cannot happen without everyone working together toward a common purpose, recognizing that a team is more important than any individual, and always putting the company first when making decisions. A leader must create an environment where team members feel seen, heard, and fully understand the company and department strategy and goals. As a multi-generational, family-owned business, Southern Glazer’s culture has an entrepreneurial spirit that challenges team members to think beyond the here and now, focusing on how we can do something better than before. Technology is business, and it is the responsibility of the IT team to bring innovative ideas that drive transformational change, to digitally transform across all company functions to create the right employee and business partner experience while also delivering operational efficiency and effectiveness.


Entrepreneurship for Engineers: Solo Founder or Co-Founder?

Founding a startup is hard, and it can be a lonely road, especially for solo founders. There are a lot of issues that come up in a startup that you can’t talk about with your employees, you can’t discuss with your investors, your friends won’t understand (unless they are also startup founders themselves) — and your spouse won’t get, either. “I was a founder, and I had a co-founder, and I cannot thank God enough to have had that opportunity,” said Dokania. “It definitely makes it easier emotionally.” Raman echoed this sentiment. “It’s incredibly hard to build a company, and doing so while knowing that you are entirely responsible for the success or failure through that entire journey is exceptionally stressful,” she said. “The highs are very high, but the lows are so extremely low.” Many founders, especially early on, think of the advantage of a co-founder as being about finding someone with complementary skills, so you can build the business while each focusing on your strengths. However, Dokania and Raman agreed that the primary benefit of having co-founders is emotional — because humans are social animals and building a company is stressful enough without also being lonely and isolating.



Quote for the day:

"Leaders begin with a different question than others, replacing 'Who can I blame?' with 'How am I responsible?'" -- Orrin Woodward

Daily Tech Digest - August 01, 2022

4 fundamental practices in IoT software development

One of the greatest concerns in IoT is security, and how software engineers address it will play a deeper role. As devices interact with each other, businesses need to be able to securely handle the data deluge. There have already been many data breaches where smart devices have been the target, notably Osram, which was found to have vulnerabilities in its IoT lightbulbs, potentially giving an attacker access to a user’s network and the devices connected to it. Security needs to be tackled at the start of the design phase, with requirement tradeoffs made as needed, rather than added as a mere ‘bolt-on’. This is highly correlated with software robustness. It may take a little bit more time to design and build robust software upfront, but secure software is more reliable and easier to maintain in the long run. A study by CAST suggests that one-third of security problems are also robustness problems, a finding that is borne out in our field experience with customers. Despite software developers’ best intentions, management is always looking for shortcuts. In the IoT ecosystem, first to market is a huge competitive driver, so this could mean that security, quality and dependability are sacrificed for speed to release.


Accountability in algorithmic injustice

Often, journalists fixate on finding broken or abusive systems, but miss out on what happens next. Yet, in the majority of cases, little to no justice is found for the victims. At most, the faulty systems are unceremoniously taken out of circulation. So, why is it so hard to get justice and accountability when algorithms go wrong? The answer goes deep into the way society interacts with technology and exposes fundamental flaws in the way our entire legal system operates. “I suppose the preliminary question is: do you even know that you’ve been shafted?” says Karen Yeung, a professor and an expert in law and technology policy at the University of Birmingham. “There’s just a basic problem of total opacity that’s really difficult to contend with.” The ADCU, for example, had to take Uber and Ola to court in the Netherlands to try to gain access to more insight on how the company’s algorithms make automated decisions on everything from how much pay and deductions drivers receive, to whether or not they are fired. Even then, the court largely refused their request for information. Further, even if the details of systems are made public, that’s no guarantee people will be able to fully understand it either – and that includes those using the systems.


Data Mesh: To Mesh or not to Mesh?

Data Mesh allows teams to curate/generate data and create usable data products for other teams. It also makes certain that platform teams can put their efforts into data engineering while data professionals can handle domain-specific data issues. While business data professionals are responsible for the quality and reliability of the data their teams produce, they can take assistance from platform teams in the face of technical glitches. Apart from that, the Data Mesh design is inclined towards business users and requires relatively minor interference from platform teams. This is unlike centralized data teams that are responsible for everything, from data frameworks and access to dealing with data-related requests. To conclude, Data Mesh's decentralized architecture encourages each party to excel in their area of expertise. The platform teams need to focus on technology, engineering, and data pipelines, while the data professionals are accountable for ensuring data quality. This holistic approach ensures end-users can perform their tasks by leveraging data insights without investing time in acquiring the results of a custom request.


Chase CIO details what entry-level job-seekers need to succeed in Fintech

Never stop learning. The skills you mastered a few years ago may no longer be relevant today, which is why it’s important to be open to constantly learning. Whether you are starting your career or have years of experience, take it upon yourself to learn new skills and technologies. ... The skills required to be a technologist have evolved, and so have the ways technologists work with colleagues across lines of business. One change we’ve really embraced as an organization is embarking on an agile and product transformation. We’ve taken advantage of the opportunity that came with the changing behaviors of consumers over the past few years to really embrace agile at a different scale. This matters tremendously, because when we deploy code or build an entirely new product, it helps millions of consumers reach their financial goals. The pace of change has accelerated, but the focus on making it easier for our customers to bank with Chase has not. Today, we’ve reorganized ourselves away from project-based teams into product-based teams. Each product now has a dedicated tech, product, design, and data & analytics leader to help speed up decision making and improve connectivity and collaboration.


Attacks using Office macros decline in wake of Microsoft action

"It's a hugely important step Microsoft is taking to start blocking these macros by default, especially due to how invisible macros are to the majority of users," adds Nathan Wenzler, chief security strategist at Tenable, a vulnerability scanning company. "But that doesn't mean the threat is eradicated or we shouldn't continue to remind users to be vigilant about opening files from untrusted sources." Other companies are seeing threat actors switching tactics because of Microsoft's move, too. "The adversaries are aware of it," observes Tim Bandos, executive vice president of cybersecurity at Xcitium, a maker of an endpoint security suite. "They're testing out new ways of working around it because they're clearly not as successful now that Microsoft has made this change." Users of one notorious malicious program, known as Emotet, have already begun shifting tactics, he notes. "We've seen them shift recently from leveraging macros to using URLs to OneDrive or Google Drive," he says.


Solana blockchain and the Proof of History

The consensus mechanism is a fundamental characteristic and differentiator among blockchains. Solana's consensus mechanism has several novel features, in particular the Proof of History algorithm, which enables faster processing time and lower transaction costs. How PoH works is not hard to grasp conceptually. It's a bit harder to understand how it improves processing time and transaction costs. The Solana whitepaper is a deep dive into the implementation details, but it can be easy to miss the forest for the trees. Conceptually, Proof of History provides a way to cryptographically prove the passage of time and where events fall in that timeline. This consensus mechanism is used in tandem with another, more conventional algorithm like Proof of Work (PoW) or Proof of Stake (PoS). Proof of History makes Proof of Stake more efficient and resilient in Solana. You can think of PoH as a cryptographic clock: it timestamps transactions with a hash that proves where in time each transaction occurred. This means the entire network can forget about verifying the temporal claims of nodes and defer reconciling the current state of the chain.
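A toy sketch of the cryptographic-clock idea, assuming nothing about Solana's actual implementation: repeatedly hashing the previous output produces a sequence that can only be computed in order, so mixing an event into the chain proves it existed no later than that point.

```python
import hashlib

def poh_chain(events, ticks_between=1000):
    """Toy Proof-of-History-style sequence. Each "tick" hashes the
    previous hash; events are folded into the running state, which
    timestamps them relative to the tick count."""
    state = hashlib.sha256(b"genesis").digest()
    recorded = []
    for event in events:
        for _ in range(ticks_between):       # clock ticks: hash of the previous hash
            state = hashlib.sha256(state).digest()
        state = hashlib.sha256(state + event).digest()  # fold the event in
        recorded.append((event, state.hex()))
    return recorded

log = poh_chain([b"tx1", b"tx2"])
# Any verifier can replay the chain and get identical hashes, so the
# order is self-evident; reordering the events changes every hash.
assert poh_chain([b"tx1", b"tx2"]) == log
assert poh_chain([b"tx2", b"tx1"]) != log
```

The chain must be generated sequentially, but verification can be parallelized across segments, which is part of how the real design saves time versus having nodes negotiate timestamps with each other.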


Why and How our AI needs to understand causality

Introducing causality to machine learning can make the model outputs more robust, and prevent the types of errors described earlier. But what does this look like? How can we encode causality into a model? The exact approach depends on the question we are trying to answer and the type of data we have available. ... They trained the model to ask “if I treat this disease, which symptoms would go away?” and “if I don’t treat this disease, which symptoms would remain?”. They encoded these questions as two mathematical formulae. Using these questions brings in causality: if treating a disease causes symptoms to go away, then it’s a causal relationship. They compared their causal model with a model that only looked at correlations and found that it performed better — particularly for rarer diseases and more complex cases. Despite the great potential of machine learning, and the associated excitement, we must not forget our core statistical principles. We must go beyond correlation (association) to look at causation, and build this into our models. 
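A minimal simulation of why correlation alone misleads (an invented toy world, not the model from the paper): a confounder Z drives both a symptom X and an outcome Y, so X and Y are perfectly associated in observational data even though intervening on X has no effect on Y.

```python
import random

random.seed(0)

def sample(intervene_x=None):
    """One draw from a toy world where Z causes both X and Y,
    and X has no causal effect on Y. Passing intervene_x mimics
    do(X=x): we set X directly, severing its link to Z."""
    z = random.random() < 0.5
    x = z if intervene_x is None else intervene_x
    y = z
    return x, y

# Observational data: X and Y look perfectly associated...
obs = [sample() for _ in range(10_000)]
p_y_given_x1 = sum(y for x, y in obs if x) / sum(1 for x, y in obs if x)
assert p_y_given_x1 == 1.0

# ...but intervening on X (as a randomized trial would) reveals no effect.
do1 = [sample(intervene_x=True) for _ in range(10_000)]
p_y_do_x1 = sum(y for _, y in do1) / len(do1)
assert abs(p_y_do_x1 - 0.5) < 0.05
```

A purely associational model would treat X as highly predictive of Y; a causal model asking "what happens to Y if we set X?" correctly concludes that treating X alone changes nothing, which is the distinction the diagnosis example above relies on.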


Cyberattack prevention is cost-effective, so why aren’t businesses investing to protect?

To measure the success of an investment, you first need to quantify the cost of what you’re trying to protect. In a simplified model, the first step is to measure the given benefits of protection, and this starts with an asset valuation: how valuable is this data to me? Those in charge of the budget then need to evaluate the risk of that data not being protected: if I don’t take the necessary measures to mitigate the risk by investing in preventative cyber-security tools, how costly could a breach be when it occurs? It is more cost-effective to validate an organisation’s controls than to spend money on more tools. By adopting specialised frameworks to counteract cyber threats, for instance running a threat-informed defence utilising automated platforms such as Breach-and-Attack Simulation (BAS), CISOs can continuously test and validate their systems. Similar to a fire drill, BAS can locate which controls are failing, allowing organisations to remediate the gaps in their defence, making them cyber ready before an attack occurs.
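The asset-valuation arithmetic described here is often formalized as annual loss expectancy (ALE). A minimal sketch, with made-up numbers:

```python
def ale(asset_value, exposure_factor, annual_rate):
    """Annual loss expectancy = single loss expectancy * annualized rate of occurrence."""
    single_loss = asset_value * exposure_factor   # fraction of the asset lost per breach
    return single_loss * annual_rate              # expected breaches per year

def control_pays_off(ale_before, ale_after, control_cost):
    """A control is worth buying if the risk it removes exceeds its cost."""
    return (ale_before - ale_after) > control_cost

before = ale(1_000_000, 0.4, 0.25)   # roughly 100,000 per year with no control
after  = ale(1_000_000, 0.4, 0.05)   # roughly 20,000 per year with validated controls
```

Here a control costing 30,000 a year removes about 80,000 of expected loss, so it pays off; the same arithmetic can show that yet another overlapping tool does not, which is the article's point about validating controls rather than buying more of them.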


Cyber Resiliency: How CIOs Can Prepare for a Cloud Outage

Beyond security issues, cloud outages can open the door to cascading disruptions affecting both routine business and mission-critical applications. “This can lead to [issues] ranging from revenue loss to more serious impacts -- such as putting lives at risk in the case of critical health care applications,” explains Ravikanth Ganta, a senior director at business consulting firm Capgemini Americas. A cloud outage’s seriousness hinges on several factors, including organization preparedness, the zones and regions affected, and the services impacted. “In many cases, businesses that build and run their applications in the cloud can endure a cloud outage with little to no impact if they architect their applications to take advantage of the automated failover capabilities readily available in the cloud,” Potter notes. Modular applications designed to leverage loosely coupled services will typically experience only a minor drop in availability or performance during a vendor outage and, in many cases, may not be affected at all. “Customers that ... haven’t architected their applications to gracefully failover or redirect traffic to unimpacted zones or regions, will face greater availability challenges when a cloud provider experiences an outage,” Potter says.


Why DesignOps Matters: How to Improve Your Design Processes

“A foundational aspect of DesignOps is the adoption of agile work breakdown structures (WBSs) to organize UX work from alignment with broad strategic objectives to screen-level details in a single EAP tool. While this feels foreign to most UX practitioners at first, agile WBS maps quite well to UX work. The business and operational benefits of this approach are profound, including more accurate plans, estimates, tracking and reporting.” With a single working environment for managers, designers, developers, and even stakeholders as part of the DesignOps strategy, everyone can easily align their work and tasks, test and comment on prototypes in real time, eliminate design handoffs, reduce costly iterations, keep track of progress and identify bottlenecks. ... There’s no such thing as a designer who can handle every process and task, because in the end they would do everything but the actual design. Digital product design is a multi-layered job that requires specialists experienced in particular fields. Just as there is a need for a separation between UX and UI design, with two distinct experts handling each, there is a need for a dedicated DesignOps person.



Quote for the day:

"The task of the leader is to get his people from where they are to where they have not been." -- Henry A. Kissinger

Daily Tech Digest - July 31, 2022

Best practices for recovering a Microsoft network after an incident

Often a recovery process is different for different sized organizations. A small business might just want to be back functional as soon as possible, while a medium-sized business might take the time to do a root cause analysis. According to the NIST document, “Identifying the root cause(s) of a cyber event is important to planning the best response, containment, and recovery actions. While knowing the full root cause is always desirable, adversaries are incentivized to hide their methods, so discovering the full root cause is not always achievable.” If you use Microsoft Defender for Business Server, Microsoft recommends proactive adjustments to your server to ensure that you can best prevent attacks, specifically that you use the same attack surface reduction rules recommended for workstations. One example: you want to block all Office applications from creating child processes. As you rebuild your network after an incident, remember these settings, as you often redeploy servers with default settings. You might not have remembered or documented all the settings you need to better protect your network.


How AI Is Transforming The Future Of The Finance Industry

Since the entire foundation of AI is learning from past data, it only seems sensible that AI would flourish in the financial services industry, where keeping books and records is a given for businesses. Consider the use of credit cards as an example. Today, we utilize credit scores to determine who is and is not eligible for credit cards. However, it is not always advantageous for businesses to divide people into “haves” and “have-nots.” Instead, information about a person’s loan repayment patterns, the number of loans that are still open, the number of credit cards that person already has, etc. can be used to tailor the interest rate on a card so that the financial institution issuing the card feels more comfortable with it. ... When it comes to security and fraud detection, AI is on top. It can leverage historical spending patterns across various transaction instruments to flag unexpected activity, such as a foreign card being used shortly after it has been used elsewhere, or an attempt to withdraw money in an amount unusual for the account in question. The system continually learns, which is another excellent aspect of AI fraud detection.


Meta faces new FTC lawsuit for VR company acquisition

On Wednesday, the FTC sued Meta in an attempt to block its acquisition of virtual reality technology company Within Unlimited and its VR fitness app Supernatural. The FTC in a press release said Meta's "virtual reality empire" already includes a virtual reality fitness app and alleged Meta is attempting to "buy its way to the top." "Meta already owns a best-selling virtual reality fitness app, and it had the capabilities to compete even more closely with Within's popular Supernatural app," John Newman, FTC Bureau of Competition deputy director, said in the release. "But Meta chose to buy market position instead of earning it on the merits. This is an illegal acquisition, and we will pursue all appropriate relief." According to Meta's statement in response to the lawsuit, the FTC's case is "based on ideology and speculation, not evidence." ... It's not the first time the FTC has accused Meta of buying out the competition. In an ongoing lawsuit against the company, the FTC alleged that Meta's previous Instagram and WhatsApp acquisitions served to kill what the company viewed as competition to its popular social media site Facebook.


Blockchain Applications That Make Sense Right Now

Using blockchain technology, users can create digital assets that are verifiable, scarce and portable – the core properties of a “token”. The owner of a blockchain-generated token can be sure they are the only owner of a limited quantity item. These are tokens that have genuine value as they solve the ownership challenge digital assets have had since day one. This is the core building block that is not possible in Web2’s centralized world. Yes, Fortnite can sell you virtual clothing, and a bank can show a deposit in your account, but those assets are at the behest of the central actors. Creators are similarly beholden to the platforms that distribute their products, and their popularity is governed by the algorithms and business interests of those centralized platforms. With blockchain, new types of assets can be created that hold value outside any centralized platform (even though those platforms may still be very relevant for creation, display and distribution). No centralized actor can unilaterally change the ownership of an asset ‘on-chain’, and rules are visible in public code. Ownership can be ascertained with certainty for all who examine it.


Yes, you are being watched, even if no one is looking for you

Whether or not you pass under the gaze of a surveillance camera or license plate reader, you are tracked by your mobile phone. GPS reports your location to weather and map apps, Wi-Fi networks reveal your location, and cell-tower triangulation tracks your phone. Bluetooth can identify and track your smartphone, and not just for COVID-19 contact tracing, Apple’s “Find My” service, or to connect headphones. People volunteer their locations for ride-sharing or for games like Pokemon Go or Ingress, but apps can also collect and share location without your knowledge. Many late-model cars feature telematics that track locations–for example, OnStar or Bluelink. All this makes opting out impractical. The same thing is true online. Most websites feature ad trackers and third-party cookies, which are stored in your browser whenever you visit a site. They identify you when you visit other sites so advertisers can follow you around. Some websites also use key logging, which monitors what you type into a page before hitting submit. Similarly, session recording monitors mouse movements, clicks, scrolling and typing, even if you don’t click “submit.”


How remote work disrupted global supply chains

To make matters worse, many U.S.-based distributors and retailers decided to bulk up their inventories to hedge against shortages. The surge of e-commerce contributed to the disruptive spiral by making two-day shipping a necessity. In addition, the resulting shortage of warehouse space worsened bottlenecks by pushing supplies back to shipping docks and freight terminals. While remote work isn’t entirely to blame for the supply chain crisis, it clearly kicked off a sequence of events that took on a life of its own. Zoom, Google Docs, and Amazon undermined the assumption that history would repeat itself. When is it all going to end? Experts disagree. Most say things will probably improve for the rest of this year and return to something close to normal by the end of 2023. But even the Federal Reserve Bank of Cleveland recently admitted that the sources it relies upon for intelligence “are mostly based on hope rather than on concrete evidence.” In the meantime, the crisis has also cast the spotlight on the delicate interconnections that hold the world’s supply lines together and the effects that minor disruptions at the far end of the chain can have further upstream.


Network security depends on two foundations you probably don’t have

Any use of the network creates traffic and traffic patterns. Malware that’s probing for vulnerabilities is an application, and it also generates a traffic pattern. If AI/ML can monitor traffic patterns, it can pick out a malware probe from normal application access. Even if malware infects a user with the right to access a set of applications, it’s unlikely the malware would be able to duplicate the traffic pattern that user generated with legitimate access. Thus, AI/ML could detect a difference, and create an alert. That alert, like a journal alert on unauthorized connections, would then be followed up to validate the state of the user’s device security. The advantage of the AI/ML traffic pattern analysis is that it can be effective even when user identity is difficult to pin down, so explicit connection authorization is problematic. In fact, you can do traffic pattern analysis at any level from single users to the entire network. Think of it as involving a kind of source/destination-address-logging process; at a given point, have I seen packets from or to this address or this subnetwork before? If not, then a more detailed analysis may be in order, or even an alert.
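The source/destination-logging idea can be sketched as a per-user journal of previously seen addresses. The class and field names below are invented for illustration; a real system would feed the novelty signal into an ML model rather than alerting on every first contact.

```python
from collections import defaultdict

class AddressJournal:
    """Remembers which destination addresses each user has talked to before."""
    def __init__(self):
        self._seen = defaultdict(set)

    def observe(self, user, dst):
        """Record a flow; return True if this destination is new for this user,
        i.e. a candidate for deeper analysis or an alert."""
        novel = dst not in self._seen[user]
        self._seen[user].add(dst)
        return novel

journal = AddressJournal()
journal.observe("alice", "10.0.0.5")   # first contact: worth a closer look
journal.observe("alice", "10.0.0.5")   # matches her established pattern
```

The same journal can be kept at subnet or whole-network granularity, which is what lets the analysis work even when user identity is hard to pin down.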


Building resilience against emerging security threats

Threats such as malware and data breaches almost always rely on misconfigured systems to succeed. Perhaps a default password hasn’t been changed, a cloud storage instance has been set to public, or a dangerous port is accidentally left open to the internet. These are all errors that can be hard to spot in a complex data centre. It’s a time-consuming task that may not be a top priority amongst competing business goals, meaning the vulnerability remains unidentified. Configuration management tools help here by scanning the entire estate, from cloud storage to local servers, websites to network devices, and more, identifying misconfigurations. They are vendor agnostic and surface anomalies that might otherwise go unnoticed – until it is too late. Auditing the estate in this way gives CISOs the visibility and control they need to effectively monitor their estate and be proactive in remediating misconfigurations. Armed with this insight, the company’s risk is reduced and its resilience is enhanced. 
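Conceptually, such a tool evaluates vendor-agnostic rules over an inventory of resources. A minimal sketch; the rule names and resource fields are invented for illustration, not taken from any particular product:

```python
# Each rule maps a name to a predicate over a resource description.
RULES = {
    "default-password":  lambda r: r.get("password") in {"admin", "changeme"},
    "public-storage":    lambda r: r.get("kind") == "bucket" and r.get("acl") == "public",
    "rdp-open-to-world": lambda r: 3389 in r.get("open_ports", []),
}

def scan(inventory):
    """Return (resource, rule) pairs for every misconfiguration found."""
    return [(name, rule)
            for name, resource in inventory.items()
            for rule, check in RULES.items()
            if check(resource)]
```

Running the scan continuously over the whole estate, from cloud storage to network devices, is what turns the audit from a periodic chore into the proactive visibility the article describes.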


SQL Server 2022: Here’s what you need to know

If you’ve ever looked at the claims for blockchains and thought that an append-only database could do that without all the work of designing and maintaining a distributed system that likely doesn’t scale to high-throughput queries (or the environmental impact of blockchain mining), another feature that started out in Azure SQL and is now coming to SQL Server 2022 is just what you need. “Ledger brings the benefits of blockchains to relational databases by cryptographically linking the data and their changes in a blockchain structure to make the data tamper-evident and verifiable, making it easy to implement multi-party business processes, such as supply chain systems, and can also streamline compliance audits,” Khan explained. For example, the quality of an ice cream manufacturer’s ice cream depends on both the ingredients that its suppliers send and the finished ice cream it delivers being shipped at the right temperature. If the refrigerated truck has a fault, the cream might curdle, or the ice cream might melt and then refreeze once it’s in the store freezer. By collecting sensor information from everyone in its supply chain, the ice cream manufacturer can track down where the problem is.
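"Cryptographically linking the data and their changes" means each row carries a hash that chains to the previous row, so rewriting history is detectable. A toy sketch of the idea (not SQL Server Ledger's actual implementation):

```python
import hashlib
import json

def append(ledger, row):
    """Append a row, chaining its hash to the previous entry."""
    prev = ledger[-1]["hash"] if ledger else "0" * 64
    digest = hashlib.sha256(json.dumps([row, prev], sort_keys=True).encode()).hexdigest()
    ledger.append({"row": row, "prev": prev, "hash": digest})

def verify(ledger):
    """Recompute every hash; any edited or reordered row breaks the chain."""
    prev = "0" * 64
    for entry in ledger:
        expected = hashlib.sha256(
            json.dumps([entry["row"], prev], sort_keys=True).encode()).hexdigest()
        if entry["prev"] != prev or entry["hash"] != expected:
            return False
        prev = entry["hash"]
    return True
```

Any supply-chain partner or auditor can rerun `verify` over the shared history; quietly rewriting an old temperature reading changes that row's hash and orphans everything after it.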


7 benefits of using design review in your agile architecture practices

For an enterprise to practice architecture in an agile environment, several architects and designers must support the project's agile value streams, which are the actions taken from conception to delivery and support that add value to a product or service. One organization may have a distinct team of architects producing solution designs and templates. Another might place the architect inside the agile squad in a dual senior engineer role. ... The things involved in a design review include:
- The designer: the person who wants to solve a problem.
- The documentation: the document at the center of attention. It contains information regarding all aspects of the problem and the proposed solution.
- The reviewer: the person who will review the documentation.
- The process: the agreed-upon rules and interactions that define the designer's and reviewer's communications. It may stand alone or be part of a bigger process. For example, in a software development life cycle, it could precede development, or in an API specification, it could include evaluating changes.
- The review scope: the area the reviewer tries to cover when reviewing the documentation (technical or not).



Quote for the day:

"You've got to risk the terrible and pathetic, in order to get to the graceful and elegant."

Daily Tech Digest - July 30, 2022

Google Drive vs OneDrive: Which cloud solution is right for you?

Google Drive is host to the majority of the cloud storage features that individuals have come to expect. Even with the free plan, users get access to a web interface, a mobile app, and sharing settings that can be adjusted at the admin level. Microsoft OneDrive users will enjoy similar functionality, including automatic syncing, where users indicate the files and folders they want to be backed up, so they are automatically synced with copies in the cloud. One of the biggest divides facing users when determining whether Google Drive or OneDrive is the best fit for them concerns their operating system of choice. ... Fans of Word, Excel, and the like can still use Google Drive but may have to convert documents into Docs, Sheets, and other Google-made alternatives. That’s not a major issue but might affect how you perceive the performance of each cloud solution. Although there’s not much to choose from in terms of performance, it’s worth pointing out that Microsoft Office, which is usually employed as an offline tool, will take up more storage space than Google Workspace, which can be accessed via your web client. If storage is a major concern for you, this might be worth keeping in mind.


XaaS isn’t everything — and it isn’t serviceable

BPOs and XaaS do share a characteristic that might, in some situations, be a benefit but in most cases is a limitation, namely, the need to commoditize. This requirement isn’t a matter of IT’s preference for simplification, either. It’s driven by business architecture’s decision-makers’ preference for standardizing processes and practices across the board. This might not seem to be an onerous choice, but it can be. Providing a service that operates the same way to all comers no matter their specific and unique needs might cut immediate costs but can be, in the long run, crippling. Imagine, for example, that Human Resources embraces the Business Services Oriented Architecture approach, offering up Human Resources as a Service to its internal customers. As part of HRaaS it provides Recruiting as a Service (RaaS). And to make the case for this transformation it extols the virtues of process standardization to reduce costs. Imagine, then, that you’re responsible for Store Operations for a highly seasonal retailer, one that has to ramp up its in-store staffing from Black Friday through Boxing Day. 


Myth Busting: 5 Misconceptions About FinOps

“FinOps is an evolving cloud financial management discipline and cultural practice that enables organizations to get maximum business value by helping engineering, finance, technology and business teams to collaborate on data-driven spending decisions.” ... With traditional procurement models, central teams retained visibility and control over expenditures. While this would add layers of time and effort to purchases, this was accepted as a worthwhile tradeoff. Part of the reason FinOps has come into existence is that it enables teams to break away from the rigid, centrally controlled procurement models that used to be the norm. Rather than having a finance team that acts as a central gatekeeper and bottleneck, FinOps enables teams to fully leverage opportunities available for automation in the cloud. Compared to rigid, monthly, or quarterly budget cycles—and being blindsided by cost overruns long after the fact—teams move to continuous optimization. Real-time reporting and just-in-time processes are two of the core principles of FinOps.


Selenium vs Cypress: Does Cypress Replace Selenium?

The Cypress test framework captures snapshots during test execution. It enables QAs or software developers to hover over a precise command in the Command Log to see exactly what happened at that specific phase. Unlike Selenium, one does not need to add implicit or explicit wait commands in test scripts; Cypress waits for assertions and commands automatically. QAs or developers can use stubs, clocks, and spies to validate and control the behavior of server responses, timers, or functions. The automatic scrolling operation makes sure that the component is in view prior to performing any activity (for instance, clicking on a button). Previously Cypress supported only Google Chrome tests but, with recent updates, Cypress now offers support for Mozilla Firefox as well as Microsoft Edge browsers. As the developer writes commands, the tool executes them in real time, giving visual feedback as they run. It also has excellent documentation. Test execution for a local Selenium Grid can be ported to a cloud-based Selenium Grid with minimal effort.


5 Advanced Robotics And Industrial Automation Technologies

One of the most critical advances in robotics and industrial automation technologies is the development of autonomous vehicles. These vehicles can drive themselves, making them safer and more efficient than traditional vehicles. Autonomous vehicles can be used in a variety of ways. For example, they can be used to transport goods around a factory. They can also be used to help people search for objects or people. In all cases, autonomous vehicles are much safer than traditional vehicles. As autonomous vehicles become more common, they will significantly impact the automotive industry. They will reduce the time people need to spend driving cars. They will also reduce the number of accidents that happen on the road. ... One of the most critical safety features of advanced robotics and industrial automation technologies is their danger detection systems. These systems help to protect workers from dangerous situations. One type of danger detection system is the automatic emergency braking system. This system uses cameras and sensors to detect obstacles on the road and brake automatically if necessary.


How to use the Command pattern in Java

The Command pattern is one of the 23 design patterns introduced with the Gang of Four design patterns. Command is a behavioral design pattern, meaning that it aims to execute an action in a specific code pattern. When it was first introduced, the Command pattern was sometimes explained as callbacks for Java. While it started out as an object-oriented design pattern, Java 8 introduced lambda expressions, allowing for an object-functional implementation of the Command pattern. This article includes an example using a lambda expression in the Command pattern. As with all design patterns, it's very important to know when to apply the Command pattern, and when another pattern might be better. Using the wrong design pattern for a use case can make your code more complicated, not less. We can find many examples of the Command pattern in the Java Development Kit, and in the Java ecosystem. One popular example is using the Runnable functional interface with the Thread class. Another is handling events with an ActionListener.
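The article's examples are in Java; as a language-neutral sketch, the same shape can be shown in Python, where any callable plays the role Java's `Runnable` or a lambda-implemented functional interface plays. The `Light`/`Invoker` names are invented for illustration.

```python
class Light:
    """Receiver: the object that knows how to perform the actual work."""
    def __init__(self):
        self.is_on = False
    def switch_on(self):
        self.is_on = True
    def switch_off(self):
        self.is_on = False

class Invoker:
    """Invoker: executes and records commands without knowing their receivers."""
    def __init__(self):
        self.history = []
    def submit(self, command):
        command()                      # a command is any zero-argument callable
        self.history.append(command)   # kept for logging, replay, or undo

light = Light()
remote = Invoker()
remote.submit(light.switch_on)             # a bound method as the command object
remote.submit(lambda: print("logged"))     # a lambda works too, as in Java 8+
```

Because the invoker only sees callables, commands can be queued, logged, or paired with inverses for undo; this decoupling is why the pattern fits `Runnable`-with-`Thread` and `ActionListener`-style event handling.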


Singleton Design Pattern in C# .NET Core – Creational Design Pattern

The default constructor of the Singleton class is private; by making the constructor private, client code is prevented from directly creating an instance of the Singleton class. In the absence of a public constructor, the only way to get the object of the Singleton class is to use the global method that requests it, i.e. the static GetInstance() method in the Singleton class should be used to get the object of the class. The GetInstance() method creates the object of the Singleton class when it is called for the first time and returns that instance. All subsequent requests to the GetInstance() method get the same instance of the Singleton class that was created during the first request. This standard implementation is also known as lazy instantiation, as the object of the singleton class is created only when it is required, i.e. when there is a request for the object to the GetInstance() method. The main problem with the standard implementation is that it is not thread-safe. Consider a scenario where two different requests hit the GetInstance() method at the same time; in that case, there is a possibility that two different objects get created, one by each request.
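The thread-safety fix for this race is double-checked locking. The article discusses C#; the same GetInstance() shape is sketched here in Python for illustration:

```python
import threading

class Singleton:
    _instance = None
    _lock = threading.Lock()

    def __init__(self):
        # Private-constructor equivalent: block direct instantiation.
        raise RuntimeError("call Singleton.get_instance() instead")

    @classmethod
    def get_instance(cls):
        """Lazy instantiation with double-checked locking."""
        if cls._instance is None:                # fast path, no lock taken
            with cls._lock:
                if cls._instance is None:        # re-check under the lock
                    cls._instance = super().__new__(cls)
        return cls._instance
```

The second check is the crucial part: two threads can both pass the first `if` before either acquires the lock, and without the re-check each would create its own instance, which is exactly the race described above. (In C#, `Lazy<T>` or a static initializer gives the same guarantee with less ceremony.)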


Here’s when to use data visualization tools, and why they’re helpful

Successful data visualization tools will help you understand your audience, set up a clear framework to interpret data and draw conclusions, and tell a visual story that might not come off as clean and concise with raw data points. Data visualization tools—when used properly—will help to better tell a given story and make it possible to better pull information, see trending patterns, and draw conclusions from large data sets. Data visualization tools also lean into a more aesthetically pleasing approach to mapping and tracking data. It goes beyond simply pasting information onto a pie chart and instead uses design know-how, color theory, and other practices to ensure information is presented in an interesting but easy-to-understand manner. Although data visualization tools have always been popular in the design space, the right data visualization tools can aid just about any field of work or personal interest. For example, data visualization tools can help journalists and editors track trending news stories to better understand reader interest.


9 Tips for Modernizing Aging IT Systems

Once you’ve identified where the failures are in aging systems, compute the costs in fixes, patches, upgrades, and add-ons to bring the system up to modern requirements. Now add any additional costs likely to be incurred in the near future to keep this system going. Compare the total to other available options, including a new or newer system. “While this isn’t a one-size-fits-all approach, the last 2.5 years have proven just how quickly priorities can change,” says Brian Haines, chief strategy officer for FM:Systems, an integrated workspace management system software provider. “Rather than investing in point solutions that may serve the specific needs of the organization today, a workplace tech solution that offers the ability to add or even remove certain functions later to the same system means organizations can more efficiently respond to ever-changing business, employee, workplace, visitor and even asset needs going forward.” “This also helps IT teams drastically reduce the time needed to shop for, invest in, and deploy a separate solution that may or may not be compatible,” Haines adds.


CISA Post-Quantum Cryptography Initiative: Too Little, Too Late?

Mike Parkin, senior technical engineer at Vulcan Cyber, a provider of SaaS for enterprise cyber risk remediation, agreed that the move comes a little late, but said CISA’s initiative is still a good step. “People have been saying for years that the development of quantum computing would lead to the end of cryptography as we know it,” he said. “With developments in the field bringing us closer to a usable quantum computer, it’s past time to think about how to deal with the future of cryptography.” He pointed out the modern internet relies heavily on cryptography across the board, and quantum computing has the potential to break a lot of that encryption, rendering it effectively useless. “That, in turn, would effectively break many of the internet services we’ve all come to rely on,” Parkin said. “Quantum computing is not yet to the point of rendering conventional encryption useless—at least that we know of—but it is heading that way.” He said he believes the government is in the position to set encryption standards and expectations for normal use and can work closely with industry to make sure the standards are both effective and practical.



Quote for the day:

"It is better to fail in originality than to succeed in imitation." -- Herman Melville

Daily Tech Digest - July 29, 2022

How to apply security at the source using GitOps

Let’s talk about the security aspects now. Most security tools detect potential vulnerabilities and issues at runtime (too late). In order to fix them, either a reactive manual process needs to be performed (e.g., directly modifying a parameter in your k8s object with kubectl edit) or, ideally, the fix happens at the source and is propagated all along your supply chain. This is what is called “Shift Security Left”: moving from fixing the problem when it is too late to fixing it before it happens. This doesn’t mean every security issue can be fixed at the source, but adding a security layer directly at the source can prevent some issues. ... Imagine you introduced a potential security issue in a specific commit by modifying some parameter in your application deployment. Leveraging Git's capabilities, you can roll back the change directly at the source if needed, and the GitOps tool will redeploy your application without user interaction. ... Those benefits are good enough to justify using GitOps methodologies to improve your security posture, and they come out of the box, but GitOps is a combination of a few more things.
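As a concrete, if simplified, example of a check that can run at the source before a commit merges, here is a toy linter over a Kubernetes Deployment parsed into a dict. The two rules shown are illustrative, not a complete policy:

```python
def lint_deployment(manifest):
    """Flag risky container settings in a Deployment-shaped dict."""
    issues = []
    containers = (manifest.get("spec", {})
                          .get("template", {})
                          .get("spec", {})
                          .get("containers", []))
    for c in containers:
        ctx = c.get("securityContext", {})
        if ctx.get("privileged"):
            issues.append(f"{c['name']}: privileged container")
        if ctx.get("runAsNonRoot") is not True:
            issues.append(f"{c['name']}: runAsNonRoot not enforced")
    return issues
```

Wired into CI or a pre-commit hook, a nonempty result blocks the merge, so the GitOps tool only ever deploys manifests that passed the check instead of someone patching the live object with kubectl.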


How to Open Source Your Project: Don’t Just Open Your Repo!

When opening your source code, the first task should be to select a license that fits your use case. In most cases, it is advisable to include your legal department in this discussion, and GitHub has many great resources to help you with this process. For StackRox, we took our cues from similar Red Hat and other popular open source projects and picked Apache 2.0 where possible. After you’ve decided on what parts you open up and how you will open them, the next question is, how will you make this available? Besides the source code itself, for StackRox there are also Docker images, as mentioned. That means we also open the CI process to the public. For that to happen, I highly recommend you review your CI process. Assume that any insecure configuration will be used against you. Review common patterns for internal CI processes like credentials, service accounts, deployment keys or storage access. Also, it should be abundantly clear who can trigger CI runs, as your CI credits/resources are usually quite limited, and CI integrations have been known to run cryptominers or other harmful software.


New hardware offers faster computation for artificial intelligence, with much less energy

Programmable resistors are the key building blocks in analog deep learning, just like transistors are the core elements for digital processors. By repeating arrays of programmable resistors in complex layers, researchers can create a network of analog artificial "neurons" and "synapses" that execute computations just like a digital neural network. This network can then be trained to achieve complex AI tasks like image recognition and natural language processing. A multidisciplinary team of MIT researchers set out to push the speed limits of a type of human-made analog synapse that they had previously developed. They utilized a practical inorganic material in the fabrication process that enables their devices to run 1 million times faster than previous versions, which is also about 1 million times faster than the synapses in the human brain. Moreover, this inorganic material also makes the resistor extremely energy-efficient. Unlike materials used in the earlier version of their device, the new material is compatible with silicon fabrication techniques.


Simplifying DevOps and Infrastructure-as-Code (IaC)

Configuration drift is what happens when a declarative forward engineering model no longer matches the state of a system. It can happen when a developer changes a model’s code without updating the systems built using that model. It could also be the result of an engineer doing exploratory ad-hoc operations and changing a system but failing to go back into the template and update its code. Is it realistic to ban operators from ad-hoc exploration? Actually, some companies have policies forbidding any operator or developer to touch a live production environment. Ironically, when a production system breaks, it’s the first rule overridden: ad-hoc exploration is welcomed by anyone able to get to the bottom of the issue. Engineers who adopt IaC usually don’t like the work that comes with remodeling an existing system. Still, because of high demand fueled by user frustration, there’s a tool for every configuration management language; they just fail to live up to engineer expectations. The best-case scenario is that an engineer can use a reverse-engineered template to copy and paste segments from, but will need to write the rest manually.
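Detecting drift amounts to diffing the declared model against the live state. A minimal sketch over flat key-value settings (real tools compare nested resource graphs, but the shape is the same):

```python
def detect_drift(declared, live):
    """Return settings whose declared and live values disagree,
    including keys present on only one side."""
    return {
        key: {"declared": declared.get(key), "live": live.get(key)}
        for key in declared.keys() | live.keys()
        if declared.get(key) != live.get(key)
    }
```

A nonempty result means either the template must be updated to capture the ad-hoc change, or the system must be reconciled back to the template; either way the drift is surfaced instead of silently accumulating.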


Is Your Team Agile Or Fragile?

Agility is the ability to move your body quickly and easily, and also the ability to think quickly and clearly, so it makes sense that project managers chose this term. The opposite of agile is fragile: rigid, clumsy, slow. Not only are these unflattering adjectives, they are dangerous traits when they describe a team within an organization. ... How many times have you been in a meeting where you have an idea, or you disagree with what most participants agree on, but you don't say anything and conform to the norms? Why don't you speak up? In general, there might be two reasons for it. The most likely is a lack of psychological safety: the feeling that your idea would be shut down or ignored. Perhaps you worried you would ruin team harmony, or maybe you quickly started doubting yourself. That is one of the guaranteed ways to prevent progress and limit growth. Psychological safety is the number-one trait of high-performing teams, according to Google's Project Aristotle research. It is the ability to speak up without fearing negative consequences, combined with the willingness to contribute, which leads us to the third factor.


The Heart of the Data Mesh Beats Real-Time with Apache Kafka

A data mesh enables flexibility through decentralization and best-of-breed data products. At its heart, data sharing requires reliable real-time data at any scale between data producers and data consumers. Additionally, true decoupling between the decentralized data products is key to the success of the data mesh paradigm. Each domain must have access to shared data but also the ability to choose the right tool (i.e., technology, API, product, or SaaS) to solve its business problems. The de facto standard for data streaming is Apache Kafka. A cloud-native data streaming infrastructure that can link clusters with each other out of the box makes it possible to build a modern data mesh. No data mesh will use just one technology or vendor. Learn from inspiring posts by data product vendors like AWS, Snowflake, Databricks, Confluent, and many more to successfully define and build your own data mesh. A data mesh is a journey, not a big bang, and a data warehouse or data lake (or, in modern terms, a lakehouse) cannot be its only infrastructure.
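The decoupling claim above can be illustrated with a toy, in-memory publish/subscribe sketch: one domain publishes events to a named topic and another consumes them without any direct dependency on the producer. This stands in for what Apache Kafka provides at scale; the topic and domain names are invented for illustration.

```python
# Toy in-memory stand-in for a Kafka topic, illustrating how
# decentralized data products decouple through publish/subscribe.
# All names (topics, domains, events) are illustrative only.
from collections import defaultdict
from typing import Callable

class Topic:
    """Minimal publish/subscribe channel standing in for a Kafka topic."""
    def __init__(self) -> None:
        self.subscribers: list[Callable[[dict], None]] = []

    def publish(self, event: dict) -> None:
        for handler in self.subscribers:
            handler(event)

topics: dict[str, Topic] = defaultdict(Topic)

# The "orders" domain owns its data product and publishes to its topic.
def place_order(order_id: str, amount: int) -> None:
    topics["orders.events"].publish({"order_id": order_id, "amount": amount})

# The "analytics" domain consumes without any direct coupling to orders:
# it only knows the topic name, not the producer.
revenue: list[int] = []
topics["orders.events"].subscribers.append(
    lambda event: revenue.append(event["amount"])
)

place_order("o-1", 20)
place_order("o-2", 5)
print(sum(revenue))  # 25
```

Swapping the in-memory `Topic` for a real Kafka topic changes the infrastructure but not the shape of the decoupling: producers and consumers still only share a topic contract.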


Microservice Architecture, Its Design Patterns And Considerations

Microservices architecture is one of the most useful architectures in the software industry, and, applied properly, it can lead to much better software applications. Here you'll learn what microservices architecture is, the design patterns necessary for its efficient implementation, and when (and when not) to use this architecture for your next project. ... Services in this pattern are easy to develop, test, deploy, and maintain individually. A small team is sufficient for, and responsible for, each service, which reduces extensive communication and makes things easier to manage. It also allows teams to adopt different technology stacks, upgrade the technology in existing services, and scale, change, or deploy each service independently. ... Both microservices and monoliths are architectural patterns used to develop software applications that serve business requirements. Each has its own benefits, drawbacks, and challenges. When a monolithic architecture serves a large-scale system, things can become difficult.


Discussing Backend For Front-end

Mobile clients changed this approach. The display area of mobile clients is smaller: somewhat smaller for tablets and much smaller for phones. A possible solution would be to return all data and let each client filter out what it doesn't need. Unfortunately, phone clients also suffer from poorer bandwidth. Not every phone has 5G capabilities, and even if one does, it's no use in the middle of nowhere where the connection point provides H+ only. Hence, over-fetching is not an option. Each client requires a different subset of the data. With monoliths, it's manageable to offer multiple endpoints, one per client. One could design a web app with a specific layer at the forefront that detects which client a request originated from and filters irrelevant data out of the response. Over-fetching inside the web app itself is not an issue. Nowadays, microservices are all the rage; everybody and their neighbours want to implement a microservice architecture. Behind microservices lies the idea of two-pizza teams.
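The filtering layer described above can be sketched in a few lines: a backend-for-frontend trims a full payload down to the fields each client class actually needs, so phones never pay the bandwidth cost of data they can't display. The field names and client types here are invented for illustration.

```python
# Hypothetical backend-for-frontend filtering layer: each client class
# receives only the fields relevant to it. Names are illustrative only.

FULL_PRODUCT = {
    "id": 42,
    "name": "Widget",
    "description": "A very long marketing description...",
    "price": 9.99,
    "reviews": [{"user": "a", "text": "Great!"}],
    "high_res_images": ["img-4k-1.png", "img-4k-2.png"],
}

# Which fields each client class receives; phones get the leanest subset.
FIELDS_BY_CLIENT = {
    "desktop": {"id", "name", "description", "price", "reviews", "high_res_images"},
    "tablet": {"id", "name", "description", "price", "reviews"},
    "phone": {"id", "name", "price"},
}

def bff_response(client: str, payload: dict) -> dict:
    """Return only the fields relevant to the requesting client."""
    allowed = FIELDS_BY_CLIENT[client]
    return {k: v for k, v in payload.items() if k in allowed}

print(sorted(bff_response("phone", FULL_PRODUCT)))  # ['id', 'name', 'price']
```

In a microservice architecture, each frontend team typically owns one such BFF rather than sharing a single filtering layer, which is where the pattern gets its name.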


Temenos Benchmarks Show MongoDB Fit for Banking Work

In a mock scenario, Temenos' research team created 100 million customers with 200 million accounts and pushed through 24,000 transactions and 74,000 MongoDB queries a second. Even with that considerable workload, the MongoDB database, running on an Amazon Web Services M80 instance, consistently kept response times under a millisecond, which is "exceptionally consistent," Coleman said. (This translated into an overall response time for the end user's app of around 18 milliseconds, fast enough to be effectively unnoticeable.) Coleman compared this test with an earlier one the company did in 2015 using all Oracle gear. He admits this is not a fair comparison, given the older generation of hardware; still, it is eye-opening. In that setup, an Oracle 32-core cluster was able to push out 7,200 transactions per second. In other words, a single MongoDB instance was able to do the work of 10 Oracle 32-core clusters, using much less power.


The role of the CPTO in creating meaningful digital products

It's easy to see that product, design, and more traditional business analysis have a lot of crossover, so where do you draw the line, if at all? This then poses the problem of scale. Having a CPTO over a team of 100-150 is fine, but scaled to multiple disciplines over hundreds of people, it starts to become a genuine challenge. Again, having strong "heads of" protects you from this, but it forces the CxTO to become slightly more abstracted from reality. Creating autonomy (and thus decentralisation) in established product delivery teams makes scaling less of a problem, but it requires both discipline and trust. Could you do this with both a CPO and a CTO? Yes, but by making it one person's explicit role to harmonise the two concerns, you create a more concrete objective and centralise accountability for resolving any conflict between the two worlds. In reality, many CTOs are starting to pick up product thinking, and I'm sure the reverse is also true of CPOs. However, switching between two idioms is a big ask: it's tiring (though rewarding), and it could mean your focus can't be as deep in either of the two camps.



Quote for the day:

"It is one thing to rouse the passion of a people, and quite another to lead them." -- Ron Suskind