Daily Tech Digest - November 24, 2021

The Importance of IT Security in Your Merger Acquisition

There is no question that cybersecurity risks and threats are growing exponentially. A report from Cybersecurity Ventures estimated that a ransomware attack on businesses would happen every 11 seconds in 2021, and that global ransomware costs would exceed $20 billion. It seems there are constantly new reports of major ransomware attacks, costing victims millions of dollars. Earlier this year, the major ransomware attack on Colonial Pipeline resulted in disruptions that caused fuel shortages all over the East Coast of the United States. It helped to show that ransomware attacks on critical service companies can lead to real-world consequences and widespread disruption. This world of extreme cybersecurity risks serves as the backdrop for business acquisitions and mergers. A Gartner report estimated that 60% of organizations that were involved in M&A activities consider cybersecurity a critical factor in the overall process. In addition, some 73% of businesses surveyed said that a technology acquisition was the top priority for their M&A activity, and 62% agreed there was a significant cybersecurity risk in acquiring new companies.


The Language Interpretability Tool (LIT): Interactive Exploration and Analysis of NLP Models

LIT supports local explanations, including salience maps, attention, and rich visualizations of model predictions, as well as aggregate analysis including metrics, embedding spaces, and flexible slicing. It allows users to easily hop between visualizations to test local hypotheses and validate them over a dataset. LIT provides support for counterfactual generation, in which new data points can be added on the fly, and their effect on the model visualized immediately. Side-by-side comparison allows for two models, or two individual data points, to be visualized simultaneously. More details about LIT can be found in our system demonstration paper, which was presented at EMNLP 2020. ... In order to better address the broad range of users with different interests and priorities that we hope will use LIT, we’ve built the tool to be easily customizable and extensible from the start. Using LIT on a particular NLP model and dataset only requires writing a small bit of Python code. 


How software development will change in 2022

Local development environments are now largely the only part of the software development lifecycle that still runs locally on a developer’s computer. Automated builds, staging environments and running production applications have largely moved from local computers to the cloud. Microsoft and Amazon have both been working hard on addressing this challenge. In August this year, Microsoft released GitHub Codespaces to general availability. GitHub Codespaces offers full development environments that start in seconds and can be accessed using just a web browser. The service allows technology teams who store their code in Microsoft’s GitHub service to develop using the Visual Studio Code editor fully in the cloud. Amazon also has its own solution to this problem, with AWS Cloud9 allowing developers to edit and run their code from the cloud. Startups have also been created to address this problem – in April, Gitpod announced it had raised $13m for its solution to move software development to the cloud. 


Microservices — The Letter and the Spirit

Ideally, services don’t interact with each other directly. Instead, they use some integration service to communicate, commonly a service bus. Your goal here is making each service independent from other services, so that each service has all it needs to start the job and doesn’t care what happens after it completes this job. In the exceptional cases when a service calls another service directly, it must handle the situations when that second service fails. ... Microservices present us with an interesting challenge – on the one hand, the services should be decoupled, yet on the other hand all of them must be healthy for the solution to perform well, so they must evolve gracefully without breaking the solution. ... There are multiple ways to do versioning; any convention will do. I like the three-digit semantic versioning 0.0.0, as it is widely understood by most developers and it is easy to tell what type of change a service made just by looking at which of the three digits got updated. 
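The three-digit convention can be made concrete with a minimal Python sketch. The helper names below are invented for this illustration; the point is simply how each digit signals the kind of change a service made:

```python
# Toy sketch: classify what kind of change a service made by comparing
# two semantic version strings of the form MAJOR.MINOR.PATCH.
def parse_semver(version):
    """Split 'MAJOR.MINOR.PATCH' into a tuple of ints."""
    major, minor, patch = (int(part) for part in version.split("."))
    return major, minor, patch

def change_type(old, new):
    """Return which digit was bumped: 'major' (breaking change),
    'minor' (backward-compatible feature) or 'patch' (fix)."""
    o, n = parse_semver(old), parse_semver(new)
    if n[0] != o[0]:
        return "major"
    if n[1] != o[1]:
        return "minor"
    if n[2] != o[2]:
        return "patch"
    return "none"

print(change_type("1.4.2", "2.0.0"))  # major: callers may break
print(change_type("1.4.2", "1.5.0"))  # minor: safe to upgrade
```

A consumer of the service can then decide at a glance whether an upgrade is safe or needs migration work.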


All Roads Lead To OpenVPN: Pwning Industrial Remote Access Clients

OpenVPN was written by James Yonan and is free software, available under the terms of the GNU General Public License version 2 (GPLv2). As a result, many different systems support OpenVPN. For example, DD-WRT, a Linux-based firmware used in wireless routers, includes a server for OpenVPN. Due to its popularity, ease of use, and features, many companies have chosen OpenVPN as part of their solution. It’s a feasible option for organizations that want to create a secure tunnel with a couple of new features. Rather than reinventing the wheel, the company will most likely use OpenVPN as its foundation. In the past year, due to the increased popularity and growing remote workforce, Claroty Team82 was busy researching VPN/remote-access solutions. The majority of them included OpenVPN as part of the secure remote access solution while the vendor application is a wrapper that manages the OpenVPN instance. After inspecting a couple of such products, we identified a key problem with the way these types of products harness OpenVPN—a problem that, in most cases, can lead to a remote code execution just by luring a victim to a malicious website.


Stealthier Version of BrazKing Android Malware Spotted in the Wild

"It turns out that its developers have been working on making the malware more agile than before, moving its core overlay mechanism to pull fake overlay screens from the command-and-control (C2) server in real-time," IBM X-Force researcher Shahar Tavor noted in a technical deep dive published last week. "The malware […] allows the attacker to log keystrokes, extract the password, take over, initiate a transaction, and grab other transaction authorization details to complete it." The infection routine kicks off with a social engineering message that includes a link to an HTTPS website that warns prospective victims about security issues in their devices, while prompting an option to update the operating system to the latest version. ... BrazKing, like its predecessor, abuses accessibility permissions to perform overlay attacks on banking apps, but instead of retrieving a fake screen from a hardcoded URL and presenting it on top of the legitimate app, the process is now conducted server-side so that the list of targeted apps can be modified without changing the malware itself.


Common Cloud Misconfigurations Exploited in Minutes, Report

Unit 42 conducted the current cloud-misconfiguration study between July 2021 and August 2021, deploying 320 honeypots with even distributions of SSH, Samba, Postgres and RDP across four regions, including North America (NA), Asia Pacific (APAC) and Europe (EU). Their research analyzed the time, frequency and origins of the attacks observed during that time in the infrastructure. To lure attackers, researchers intentionally configured a few accounts with weak credentials such as admin:admin, guest:guest, administrator:password, which granted limited access to the application in a sandboxed environment. They reset honeypots after a compromising event—i.e., when a threat actor successfully authenticated via one of the credentials and gained access to the application. ... The team analyzed attacks according to a variety of attack patterns, including: the time attackers took to discover and compromise a new service; the average time between two consecutive compromising events of a targeted application; the number of attacker IPs observed on a honeypot; and the number of days an attacker IP was observed.


Getting real about DEI means getting personal

Leaders also need to know themselves and their own biases. “We learn biases through the media, family, friends, and educators over time and often don’t realize that they’re causing harm,” Epler explained. She called out her own struggles with nonbinary gender pronouns. I can relate. When you grow up in a Dick-and-Jane world, it isn’t easy to switch pronouns and learn new ones that conflict with grammatical rules that have become baked into your DNA after decades of writing. If you aren’t aware of your biases, they are likely to manifest in microaggressions, if not something worse. “Microaggressions are everyday slights, insults, and negative verbal and nonverbal communications that, whether intentional or not, can make someone feel belittled, disrespected, unheard, unsafe, other, tokenized, gaslighted, impeded, and/or like they don’t belong,” writes Epler in her book. When leaders witness microaggressions, they must defend the people subjected to them.


IT hiring: 5 ways to attract talent amidst the Great Resignation

By now, perhaps your organization has its remote work environment down to a science. Ask yourself what resources you can promote to potential new hires that will instill confidence in their decision to move forward with your company. Especially for recent graduates just entering the workforce, a commitment to help them transition and build success from the start can help move the needle in your organization’s favor. Earlier this year, for example, social media software company Buffer found success by offering new hires $500 to set up their home office. According to one employee engagement blog, Buffer also offers its employees coworking space stipends and internet reimbursement. To increase engagement and productivity, consider what portion of your resources you can allocate to designing a premium onboarding experience for new hires. A strong career growth curve is a must-have for recent grads. Making your career advancement initiatives clear in the early stages of the recruiting process is a win-win for organizations and employees alike.


Report: China to Target Encrypted Data as Quantum Advances

The Booz Allen Hamilton researchers note that since approximately 2016, China has emerged as a major quantum-computing research and development center, backed by substantial policy support at the highest levels of its government. Still, the country's quantum experts have suggested that they remain behind the U.S. in several quantum categories - though China hopes to surpass the U.S. by the mid-2020s. While experts say this is unlikely, China may surpass Western nations in early use cases, the report states. Advancements in quantum simulations, the researchers contend, may expedite the discovery of new drugs, high-performance materials and fertilizers, among other key products. These are areas that align with the country's strategic economic plan, which historically parallels its economic espionage efforts. "In the 2020s, Chinese economic espionage will likely increasingly steal data that could be used to feed quantum simulations," researchers say, though they claim it is unlikely that Chinese computer scientists will be able to break current-generation encryption before 2030. 


Otomi: OSS Developer Self-Service for Kubernetes

The ultimate goal of developer self-service is to have less friction in the development process and ensure that developers can deliver customer value faster. This can be achieved by enabling the separation of concerns for both dev and ops teams. The ops team manages the stack and enforces governance and compliance with security policies and best practices. Dev teams can create new environments on-demand, create and expose services using best practices, use ready-made templatized options, and get direct access to all the tools they need for visibility. Think of it as paving the road towards fast delivery and minimizing risks by providing safeguards and standards. Developers can do what they need to do, when they want to do it – though, admittedly, not always exactly how they would like to do it. The only challenge here is that building a platform like this takes a lot of time, and not all organizations have the resources to do so. The goal behind the Otomi open-source project was to offer a single deployable package that offers all of this out-of-the-box.



Quote for the day: 

"Leaders who won't own failures become failures." -- Orrin Woodward

Daily Tech Digest - November 23, 2021

How to investigate service provider trust chains in the cloud

Microsoft Detection and Response Team (DART) has been assisting multiple organizations around the world in investigating the impact of NOBELIUM’s activities. While we have already engaged directly with affected customers to assist with incident response related to NOBELIUM’s recent activity, our goal with this blog is to help you answer the common and fundamental questions: How do I determine if I am a victim? If I am a victim, what did the threat actor do? How can I regain control over my environment and make it more difficult for this threat actor to regain access to our environments? ... DAP can be beneficial for both the service provider and end customer because it allows a service provider to administer a downstream tenant using their own identities and security policies. ... Azure AOBO is similar in nature to DAP, albeit the access is scoped to Azure Resource Manager (ARM) role assignments on individual Azure subscriptions and resources, as well as Azure Key Vault access policies. Azure AOBO brings similar management benefits as DAP does.


Sharded Multi-Tenant Database using SQL Server Row-Level Security

The distribution of tenants among multiple servers can be made using different methods. An intuitive method would be: put the first 10 tenants on server A, then provision a new server B only when needed and put the next 10 tenants there, and so on. Another method would be starting with a few servers and distributing tenants evenly across those servers: Let's say you have 3 servers called A, B, C, you'd put Tenant1 into A, Tenant2 into B, Tenant3 into C, Tenant4 into A again, Tenant5 into B, etc. So basically tenants are distributed according to (TenantId)%(NumberOfServers). If you don't want to have a single catalog (which as I said before is both a bottleneck and a single point of failure) you can spread your catalog across multiple servers (exactly like the tenants' data), as long as your requests can be routed directly to the right place, which would require the sharding to be based on something like the tenant domain. ... The Security Policy TenantAccessPolicy can be used to apply filters over any number of tables. To make sure that any table with the [TenantId] column will always be filtered, we can create a DDL trigger that will apply the security predicate to any new (or modified) table. 
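The modulo-based distribution described above can be sketched in a few lines of Python. The server names and the routing helper are hypothetical; the sketch just reproduces the article's Tenant1-into-A example:

```python
# Toy sketch of modulo-based tenant routing across shard servers.
SERVERS = ["serverA", "serverB", "serverC"]  # hypothetical shard names

def shard_for(tenant_id, servers=SERVERS):
    """Route a 1-based TenantId to a shard using (TenantId) % (NumberOfServers).
    The -1 makes Tenant1 land on the first server, matching the example:
    Tenant1 -> A, Tenant2 -> B, Tenant3 -> C, Tenant4 -> A again."""
    return servers[(tenant_id - 1) % len(servers)]

print(shard_for(1))  # serverA
print(shard_for(4))  # serverA (wraps around)
```

Note one design caveat: adding a server changes the mapping for existing tenants, which is why schemes like consistent hashing or a lookup catalog are often preferred once tenants must be rebalanced.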


How Nvidia aims to demystify zero trust security

Nvidia is succeeding at its mission of demystifying zero trust in datacenters, starting with its BlueField DPU architecture. Its architecture includes secure boot with hardware root-of-trust, secure firmware updates, and Cerberus compliance, with more enhancements to support the build-out of its zero-trust framework. One of Nvidia’s core strengths is its ability to extend and scale DPU core features with SDKs and related software, while scaling to support larger AI and data science workloads. Doubling down on DOCA development this year, Nvidia used GTC 2021 to announce that the 1.2 release supports new authentication, attestation, isolation, and monitoring features, further strengthening Nvidia’s zero-trust platform. In addition, Nvidia says it is seeing momentum in customers and partners signing up for the DOCA early access program. ... Morpheus monitors network activity using unsupervised machine learning algorithms to understand typical behavioral patterns, as well as identity, endpoint, and location parameters across multiple networks. 


Privacy vs. Security: What’s the Difference?

The difference between data privacy and data security comes down to who and what your data is being protected from. Security can be defined as protecting data from malicious threats, while privacy is more about using data responsibly. This is why you’ll see security measures designed around protecting against data breaches no matter who the unauthorized party is that’s trying to access that data. Privacy measures are more about managing sensitive information, making sure that the people with access to it only have it with the owner’s consent and are compliant with security measures to protect sensitive data once they have it. ... Using apps with end-to-end encryption is a good way to boost the security of your data online. Messaging services like Signal are encrypted end-to-end, meaning that no one but the sender and recipient of the message can view the data. That’s because the data is encrypted (or scrambled) before being sent, then decrypted only when it hits your device. One caveat here is to make sure the service you’re using is actually end-to-end encrypted. 
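To make the encrypt-before-send, decrypt-on-device idea concrete, here is a deliberately simplified Python toy. It is NOT a real end-to-end encryption scheme (real messengers such as Signal use vetted key-exchange protocols and authenticated ciphers); it only illustrates that data scrambled with a shared key is unreadable in transit and recoverable only by a holder of that key:

```python
import hashlib

def keystream(key: bytes, nonce: bytes, length: int) -> bytes:
    """Derive a pseudo-random keystream by hashing key + nonce + counter.
    Toy construction for illustration only -- NOT cryptographically secure."""
    out = b""
    counter = 0
    while len(out) < length:
        out += hashlib.sha256(key + nonce + counter.to_bytes(8, "big")).digest()
        counter += 1
    return out[:length]

def encrypt(key: bytes, nonce: bytes, data: bytes) -> bytes:
    """XOR data with the keystream; applying it twice decrypts."""
    ks = keystream(key, nonce, len(data))
    return bytes(a ^ b for a, b in zip(data, ks))

key, nonce = b"shared-secret", b"msg-001"
ct = encrypt(key, nonce, b"meet at noon")  # scrambled before being sent
pt = encrypt(key, nonce, ct)               # decrypted only on the recipient's device
print(pt)
```

The caveat in the excerpt applies here too: the security property only holds if the key truly never leaves the two endpoints, which is exactly what "end-to-end" promises.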


Five principles for navigating the post-pandemic era

The pandemic has permanently changed what it means to be “at work”. Work is no longer a place you go, but what you do. Hybrid working, and the ability to work from anywhere, is here to stay. A huge part of this shift has been facilitated by our capacity to invent new ways of working fit for the digital age. Video conferencing, the cloud, instant messaging: it’s all part of the same narrative – how technology can facilitate new behaviours and patterns that can benefit the workforce. Network-as-a-Service (NaaS), for example, is a secure, cost-effective subscription-based model that lets businesses of all sizes consume network infrastructure on-demand and as needed. Think of it like a thermostat, where you can increase or decrease temperature to suit your needs. With a solution like NaaS, businesses can ensure their employees have the same security and network connectivity at a coffee shop or at home, as they would in the office. This fundamentally changes what it means to be safe, secure and online – and employees can work from any location.


Enterprise Readiness For The Digital Age: Digital Fluency And Digital Resiliency

Digital fluency is the missing ingredient in many digital transformation efforts. In most cases, I would argue that it’s not the technology that’s holding an employee back but the lack of digital infrastructure, culture, leadership, and skills required to thrive alongside technologies. Digital literacy in the workforce can be tricky, especially for a large organization with thousands of employees. Companies must consider each employee’s age, background, educational qualifications, and current digital literacy level. The challenges go beyond diversity and inclusion (D&I); they also include resistance to change, fear of missing out (FOMO), tracking the change management, the continuous process of change, etc. To be successful, businesses will need to provide the right digital tools and training to the workforce, including leadership and cultural support, to build tech intensity, i.e., an organization’s ability to adapt and integrate the latest technology to develop its unique digital capability and trust factor.


Why cybersecurity training needs a post-pandemic overhaul

Unless you’re training tech workers, there really is no reason to overwhelm your learners with industry jargon. The average employee will struggle with overly technical language and may end up missing the point of the training entirely while trying to memorize complicated terms. Cybersecurity training materials should be written in layman’s terms. Accessible training language is the first step in making any kind of training stick. Another unfortunate side effect of relying too heavily on industry jargon during training is that it makes the average employee unable to see how the training relates to their daily job. When your training materials are abstract or don’t incorporate real-life scenarios, they can be more easily disregarded as something employees probably won’t have to deal with. However, this could not be further from the truth, especially with the rise of remote and hybrid working. In fact, according to Tanium, 90% of companies faced an increase in cyberattacks due to COVID-19, making cybersecurity training more critical than ever.


Digital transformation: 4 IT leaders share how they fight change fatigue

Digital transformation can be a never-ending journey, but there are still key milestones and inflection points. Breaking the journey down this way helps keep the momentum going and allows time for reflection to make any course corrections. While it’s important to keep looking forward, don’t forget to look back and reflect on how far the organization has come and lessons learned along the way. Additionally, maintain an external perspective on where the competition is and how customer preferences may be changing. Keeping these stakeholders at the center of your plans helps keep everyone energized and focused. Create a culture of embracing change and uncertainty. Many large complex businesses have been focused on eliminating uncertainty and risk, but the digital transformation journey is not one of certainty and zero risk. Getting comfortable with that as a way of surviving and thriving will help transformation team members realize they are not swimming upstream, but with the current.


More Than Half of Indian Loan Apps Illegal, RBI Panel Finds

Some digital lending platforms exploit users' lack of financial awareness and charge them exorbitant interest rates, Rahul Pratap Yadav, chief business officer and strategy at digital payments firm iMoneyPay and former senior vice president at Yes Bank, tells ISMG. He adds that digital lenders ensnare other customers through multilevel marketing and by offering them referral bonuses. The lack of awareness of privacy and the absence of regulatory mandates protecting user identity have also contributed to the list of challenges in the digital lending space, Yadav notes. He recommends that digital lenders "have the right checks and balances in the app, and educate borrowers on financial fraud and getting into bad debt because of financial irresponsibility." The Indian digital lending space is also home to several China-based actors, according to the working group. "Anyone that had access to money and can build an app is capable of becoming a digital lender," Sasi says. Many of these unregulated digital lending apps charge 10% to 15% monthly interest, making the lending market a lucrative business for companies trying to make a quick buck, he says.


Guarding against DCSync attacks

Step one is to implement basic security and hygiene practices for Active Directory. The attack requires the threat actor to have already compromised a domain administrator account or any other account that has been granted the DCSync permissions. As such, monitoring the permissions of your domain head is critical so that you are aware of which accounts or groups have been assigned the powerful DCSync permissions. You might find that you should revoke permissions for some users who had accidentally been granted them years ago. ... The second focus for enterprises should be on preventing lateral movement when attackers breach the network. Organizations should control access according to the principle of least privilege. Using a tiering model—where no domain account would ever log onto systems not involved in managing AD itself—will clearly make it harder for adversaries to elevate their privileges. Access rights must be regularly reviewed to ensure users do not have privileges they do not need for their duties. 
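As an illustration of the monitoring step, the sketch below flags principals holding the replication extended rights that DCSync abuses, outside an expected baseline. The input format is hypothetical (in practice you would export ACEs via LDAP or PowerShell); the two GUIDs are the well-known extended rights DS-Replication-Get-Changes and DS-Replication-Get-Changes-All:

```python
# Hedged sketch: audit exported ACEs from the domain object for
# replication extended rights that enable DCSync.
REPLICATION_RIGHTS = {
    "1131f6aa-9c07-11d1-f79f-00c04fc2dcd2",  # DS-Replication-Get-Changes
    "1131f6ad-9c07-11d1-f79f-00c04fc2dcd2",  # DS-Replication-Get-Changes-All
}

EXPECTED = {"Domain Controllers", "Administrators"}  # adjust to your baseline

def suspicious_dcsync_grants(aces):
    """aces: iterable of dicts like
    {'principal': 'CONTOSO\\svc-x', 'right_guid': '1131f6aa-...'}.
    Returns principals granted replication rights outside the baseline."""
    flagged = set()
    for ace in aces:
        if ace["right_guid"].lower() in REPLICATION_RIGHTS:
            name = ace["principal"].split("\\")[-1]
            if name not in EXPECTED:
                flagged.add(ace["principal"])
    return flagged

sample = [
    {"principal": "CONTOSO\\Domain Controllers",
     "right_guid": "1131f6aa-9c07-11d1-f79f-00c04fc2dcd2"},
    {"principal": "CONTOSO\\old-helpdesk-user",
     "right_guid": "1131f6ad-9c07-11d1-f79f-00c04fc2dcd2"},
]
print(suspicious_dcsync_grants(sample))  # flags the stale helpdesk grant
```

Running such a check on a schedule is one way to catch exactly the "accidentally granted years ago" permissions the article warns about.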



Quote for the day:

"People seldom improve when they have no other model but themselves." -- Oliver Goldsmith

Daily Tech Digest - November 22, 2021

Why you should choose Node.js for developing server-side applications

Node allows you to quickly develop an MVP. The Node ecosystem already offers a large number of packages with various functions. You don't have to spend time developing the basic functionality, but can just focus on the business logic. ... You don't have to reinvent the wheel on every project, which inevitably causes a lot of mistakes and makes the work boring, but can work closely on tasks that are important for the project. You also get greater freedom in choosing an approach, building an architecture, and finalizing standard functionality that does not meet the requirements of the architect and/or customer. Node is built on the basis of the JavaScript language. As a result, this significantly increases the likelihood of developing full-stack specialists in the development team: front-end developers who are well versed in the backend, or backend developers who are well versed in the frontend. ... If you have had to work closely with the front end before, then you have a good understanding of the processes that happen to data in the user-facing part of the resource and, as a result, a simpler dialogue with front-end developers. A good full-stack specialist is often more valued in the market than a good backend or frontend developer.


How autonomous automation is the future

An APM solution measures application performance to identify potential bottlenecks, ideally before deployment, e.g., in a test phase. Most of them do that fine. Some of them even have predictive capabilities, in the sense that they can warn before a critical situation arises. Even better, an APM system that predicts that excessive runtimes will arise for operations in given circumstances, before a significant usability incident occurs, would be incredibly useful. The scenario above was caused by a high number of calls between the systems in combination with a high latency due to the long distance routing. Situations like this one can be identified and therefore solved before they occur. Scoring of opportunities already helps salespersons, no doubt. Still, additionally offering the actions that could be performed to improve the likelihood of successfully closing opportunities is far more helpful. Or in the case of the ticket routing based upon (bad) sentiment: suggesting which actions could be taken to improve the situation would help many agents.
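The "warn before runtimes become critical" idea can be illustrated with a toy baseline tracker. The class name, smoothing factor and threshold below are invented for this sketch, not taken from any APM product:

```python
# Toy sketch: track an exponentially weighted moving average (EWMA) of
# operation runtimes and flag calls that drift far above the baseline,
# before users notice a usability incident.
class RuntimeWatcher:
    def __init__(self, alpha=0.2, threshold=3.0):
        self.alpha = alpha          # EWMA smoothing factor
        self.threshold = threshold  # warn when runtime > threshold * baseline
        self.baseline = None

    def observe(self, runtime_ms):
        """Feed one measured runtime; return True if it looks anomalous."""
        if self.baseline is None:
            self.baseline = runtime_ms
            return False
        anomalous = runtime_ms > self.threshold * self.baseline
        # Update the baseline either way so the model tracks slow drift.
        self.baseline = (1 - self.alpha) * self.baseline + self.alpha * runtime_ms
        return anomalous

watcher = RuntimeWatcher()
readings = [100, 105, 98, 110, 102, 450]  # last call: chatty long-distance hop
flags = [watcher.observe(r) for r in readings]
print(flags)  # only the 450 ms outlier is flagged
```

A real predictive APM would of course correlate many more signals (call counts, routing latency, circumstances of the operation), but the shape of the check is the same: compare current behavior against a learned baseline and alert early.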


The Daily Scrum event: 5 surprisingly common misconceptions

The purpose of the Daily Scrum is to improve the likelihood of delivering an increment that meets the Sprint Goal. According to the 2020 Scrum Guide, “Daily Scrums improve communications, identify impediments, promote quick decision-making, and consequently eliminate the need for other meetings.” These benefits won’t come through an email thread or even a group chat. The Daily Scrum is not a status report; it’s a collaborative event that involves discussion and group decision-making. If your team can replace its Daily Scrum with an email, you might not be practicing the Daily Scrum in the right way. ... One of the most popular formats that Developers use for the Daily Scrum involves each Developer sharing in turn what they did yesterday to help the team meet the Sprint Goal, what they will do today to help the team meet the Sprint Goal, and whether they have any impediments. It can be a useful structure, but if the team follows it robotically, they might miss the point of the Daily Scrum. At the Daily Scrum, the Developers should be inspecting progress towards the Sprint Goal together and talking about how it's going. This isn’t just a burn-up chart, or three questions to tick off.


Mastercard opens fintech, cybersecurity innovation lab in Beersheba

The lab has a number of focus areas including API (Application Programming Interface) security, vulnerability management, ransomware, digital identity and authentication, virtual payment wallets, and fraud prevention, said FinSec Innovation Lab CEO Sidney Gottesman. “Anything that Mastercard or Enel can use to protect themselves or to provide to their respective customers,” Gottesman told The Times of Israel on Sunday. Gottesman is senior vice president of Corporate Security at Mastercard, and the program owner for employee identity and access management. He previously held senior roles at Bank Leumi USA and Citigroup. The FinSec Innovation Lab is currently working with five Israeli startups to help them develop and test their solutions at the new center in Beersheba, offering the physical space, services, mentorship, and real-world data with which the companies can perform simulations of complex financial processes and cyber product testing, said Gottesman. The startups will also be able to receive funding from the Israel Innovation Authority.


Want to Stand Out as a Data Scientist? Start Practicing These Non-Technical Skills

Having good communication skills is not unique to just data science but quite frankly, any other professional work environment. Communication in the workplace can be broken down into two types: verbal communication and written communication. Without diving into too much detail, the main differences between the two are the speed of transmission and proof of record. Verbal communication has a high speed of transmission but no proof of record. As a result, it is more often used in internal stand-ups or client meetings where you are simply providing an update to the team or need immediate feedback on a particular idea. Written communication, on the other hand, has a low speed of transmission but offers proof of record. This can be in the form of emails, Slack messages or even comments in your code. Having good communication skills can go a long way when working in a collaborative environment. Not only does it make or break a team’s efficiency but it also helps with persuading others to pursue an idea during a project.


AI-driven adaptive protection against human-operated ransomware

The adaptive protection feature works on top of the existing robust cloud protection, which defends against threats through different next-generation technologies. Compared to the existing cloud protection level feature, which relies on admins to manually adjust the cloud protection level, the adaptive protection is smarter and faster. It can, when queried by a device, automatically ramp the aggressiveness of cloud-delivered blocking verdicts up or down based on real-time machine learning predictions, thus proactively protecting the device. We can see the AI-driven adaptive protection in action in a case where the system blocked a certain file. Before the occurrence of this file on the device, there were suspicious behaviors observed on the device such as system code injection and task scheduling. These signals, among others, were all taken into consideration by the AI-driven adaptive protection’s intelligent cloud classifiers, and when the device was predicted as “at risk,” the cloud blocking aggressiveness was instantly ramped up. 
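Microsoft's actual classifiers are far more sophisticated, but the idea of ramping blocking aggressiveness up from accumulated risk signals can be sketched simply. The signal names, weights and tier thresholds below are hypothetical, chosen only to mirror the injection-plus-scheduling case described above:

```python
# Illustrative sketch (not Microsoft's implementation): map a device risk
# score, accumulated from suspicious behaviors, to a cloud-blocking tier.
SIGNAL_WEIGHTS = {            # hypothetical weights for observed behaviors
    "code_injection": 0.4,
    "task_scheduling": 0.2,
    "credential_access": 0.5,
}

def blocking_level(signals):
    """Sum signal weights into a risk score and pick an aggressiveness tier."""
    score = sum(SIGNAL_WEIGHTS.get(s, 0.1) for s in signals)
    if score >= 0.6:
        return "aggressive"   # block even on low-confidence verdicts
    if score >= 0.3:
        return "elevated"
    return "normal"

# A device showing injection plus scheduling (as in the case above) ramps up.
print(blocking_level(["code_injection", "task_scheduling"]))  # aggressive
```

The key property this models is the one the excerpt highlights: the ramp-up happens automatically per device at query time, rather than waiting for an admin to adjust the cloud protection level by hand.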


Mastering the connection between strategy and culture

Ideally, you develop strategy and organizational culture together in a connected, integrated approach from the beginning. You experiment, learn, and iterate as you align your strategic direction with the behaviors that will help you get there. Indeed, in a recent online poll I conducted with 300 executives, 56% said that they used this approach; 30% said strategy came first. Sometimes it is necessary to put more emphasis on one aspect before connecting the two. A toxic workplace, an ethical issue, or a poor relationship with a supplier may require immediate attention to the culture, especially if important stakeholders, such as investors or regulators, express their concerns. This requires interrogating the causes, taking remedial action, dealing with the immediate impacts, and starting to build new ways of thinking and working. In this scenario, developing a new strategy might need to wait until there is sufficient cultural progress. At other times, the strength of competitor activity (e.g., in launching new products or services, or in pursuing aggressive pricing policies) or the dynamics of customer behavior may require a company to make strategic choices before working on the evolution of the culture.


A Design Thinking Roadmap for Process Improvement and Organizational Change

Structured interviews were the first step in engaging people in the organization in the change initiative. Each interview session lasted two hours. I engaged the members of the management team and learned their perspectives on the proposal process for the various work roles they performed. Questions were tailored to the organization and the proposal process, and addressed the following five categories: stakeholder role, problem discovery (i.e., pains), problem validation, opportunity discovery (i.e., gains), and opportunity validation. The structured interviews provided a safe environment for people to share their candid opinions on the process and potential ideas for improvement, and they provided an opportunity to build trust with the key stakeholders in the organization. During the observation phase of the change effort, I used the interviews as a way to get to know the people in the organization, learn about the process, and understand the problems and potential solutions for improvement. 

As the business begins to refocus and reformulate business strategies, CIOs will need to reformulate IT strategies. The direction your business heads may be very different from where it was headed pre-2020, so it’s critical to remain well-informed about business objectives. ... While looking at strategic alignment, be sure to evaluate which IT initiatives are meant to run the business and which are intended to improve the business. If you don’t address basic IT operations, other efforts will be wasted. If you focus solely on IT operations, you’ll be seen as a utility and not an innovative business partner. Running the IT business means delivering business value. This includes improving cost, speed, and capabilities as well as reducing technical debt by decommissioning technology and deselecting projects. Shaping the business involves investments in innovation and enabling the business to respond quickly to new opportunities using IT. These efforts are future-facing. Any IT department that is not actively moving forward is actively falling behind.


Global AI regulation? Possibly, and it’s starting in the EU

Before we can delve into the impact of the proposed legislation, it’s helpful to understand what the Act defines as AI. At this early stage of the draft Act, the definition is quite broad and includes machine learning approaches, logic- and knowledge-based approaches, expert systems, statistical approaches (including Bayesian estimation), and search and optimisation methods. It stands to reason, then, that all modeling performed by augmented analytics and data science will also fall under the Act. In addition, much like GDPR, it won’t matter if your company falls outside the EU’s borders. If you have a link to the EU, such as customers, suppliers, or staff in the EU, or you even produce products for the EU, then you will have to comply. Non-compliance will carry financial penalties for an organisation: if you look closely at the draft, it speaks of fines of up to €30 million or 6% of your global annual turnover. For those companies looking to take a shortcut, don’t — the Act clearly states that if you provide the regulatory bodies with incorrect, incomplete, or misleading information, fines of €10 million or 2% of turnover will be imposed.
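As with GDPR, the "up to €30 million or 6% of turnover" formula is generally read as whichever amount is higher — an interpretation, not a quote from the draft, so treat this quick sketch of the ceiling calculation as illustrative:

```python
def max_penalty_eur(global_annual_turnover_eur, cap_eur=30_000_000, pct=0.06):
    """Fine ceiling under the draft Act's headline tier: up to EUR 30M or
    6% of global annual turnover, read (as with GDPR) as whichever is higher.
    This reading is an assumption for illustration."""
    return max(cap_eur, pct * global_annual_turnover_eur)

# A company with EUR 1B in global turnover faces a ceiling of EUR 60M, not 30M.
ceiling = max_penalty_eur(1_000_000_000)
```

For smaller companies the fixed €30 million cap dominates, which is why the percentage-based prong matters mainly to the largest firms.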



Quote for the day:

"It is the capacity to develop and improve their skills that distinguishes leaders from followers." -- Warren G. Bennis

Daily Tech Digest - November 21, 2021

Ransomware Phishing Emails Sneak Through SEGs

The original email purported to need support for a “DWG following Supplies List,” which is supposedly hyperlinked to a Google Drive URL. The URL is actually an infection link, which downloaded an .MHT file. “.MHT file extensions are commonly used by web browsers as a webpage archive,” Cofense researchers explained. “After opening the file the target is presented with a blurred out and apparently stamped form, but the threat actor is using the .MHT file to reach out to the malware payload.” That payload comes in the form of a downloaded .RAR file, which in turn contains an .EXE file. “The executable is a DotNETLoader that uses VBS scripts to drop and run the MIRCOP ransomware in memory,” according to the analysis. ... “Its opening lure is business-themed, making use of a service – such as Google Drive – that enterprises employ for delivering files,” the researchers explained. “The rapid deployment from the MHT payload to final encryption shows that this group is not concerned with being sneaky. Since the delivery of this ransomware is so simple, it is especially worrying that this email found its way into the inbox of an environment using a SEG.”


How Decentralized Finance Will Impact Business Financial Services

In essence, DeFi aims to provide a worldwide, decentralized alternative to every financial service now available, such as insurance, savings, and loans. DeFi’s primary goal is to offer financial services to the world’s 1.7 billion unbanked individuals. And this is possible because DeFi is borderless. These financial services are available to anybody with a smartphone and internet connection in any part of the world. For the impoverished and unbanked, this will revolutionize banking. They can invest anywhere in the world in anything with just the touch of a button. By providing open access for all, DeFi empowers individuals and businesses to maintain greater control over their assets and gives them the financial freedom to select how to invest their money without relying on any intermediary. DeFi is also censorship-resistant, making it immune from government intervention. Furthermore, sending money across borders is extremely costly under the existing system. DeFi eliminates the need for costly intermediaries, allowing for better interest rates and lower expenses, while also democratizing banking systems.


Addressing the Low-Code Security Elephant in the Room

What are some development choices about the application layer that affect the security responsibility? If the low-code application is strictly made up of low-code platform native capabilities or services, you only have to worry about the basics. That includes application design and business logic flaws, securing your data in transit and at rest, security misconfigurations, authentication and authorization, adherence to the principle of least privilege, providing security training for your citizen developers, and maintaining a secure deployment environment. These are the same elements any developer — low-code or traditional — would need to think about in order to secure the application. Everything else is handled by the low-code platform itself. That is as basic as it gets. But what if you are making use of additional widgets, components, or connectors provided by the low-code platform? Those components — and the code used to build them — are definitely out of your jurisdiction of responsibility. You may need to consider how they are configured or used in your application, though.


Google Introduces ClusterFuzzLite Security Tool for CI/CD

ClusterFuzzLite enables you to run continuous fuzzing in your continuous integration and delivery (CI/CD) pipeline. The result? You’ll find vulnerabilities more easily and faster than ever before. This is vital. A 2020 GitLab DevSecOps survey found that, while 81% of developers believed fuzz testing is important, only 36% were actually using it. Why? Because it was too much trouble to set fuzzing up and integrate it with their CI/CD systems. At the same time, though, as Shuah Khan, kernel maintainer and the Linux Foundation’s third Linux Fellow, has pointed out, “It is easier to detect and fix problems during the development process” than it is to wait for manual testing or quality assurance later in the game. By feeding unexpected or random data into a program, fuzzing catches bugs that would otherwise slip past the most careful eyeballs. NIST’s guidelines for software verification specify fuzzing as a minimum standard requirement for code verification. After all, as Dan Lorenc, founder and CEO of Chainguard and former Google open source security team software engineer, recently told The New Stack, 
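The core idea — throwing random or malformed inputs at a target and watching for crashes — fits in a few lines. This is a toy harness, not ClusterFuzzLite itself, and the deliberately buggy parser is invented for the example:

```python
import random

def parse_length_prefixed(data: bytes) -> bytes:
    """Toy parser with a deliberate bug: it trusts the length-prefix byte."""
    if not data:
        return b""
    n = data[0]
    body = data[1:1 + n]
    assert len(body) == n, "short read"   # fires whenever the length byte lies
    return body

def fuzz(target, iterations=2_000, max_len=64, seed=0):
    """Feed random byte strings to `target`; collect inputs that crash it."""
    rng = random.Random(seed)
    crashes = []
    for _ in range(iterations):
        data = bytes(rng.randrange(256) for _ in range(rng.randrange(max_len)))
        try:
            target(data)
        except Exception as exc:          # any unexpected exception is a finding
            crashes.append((data, exc))
    return crashes

findings = fuzz(parse_length_prefixed)    # random data trips the bug quickly
```

Tools like ClusterFuzzLite add the parts a toy omits — coverage-guided input mutation, corpus management, crash deduplication — and wire the whole loop into CI so every pull request gets fuzzed.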


Bitcoin Is How We Really Build A New Financial System

When it comes to a foundational sound money, Bitcoin is unmatched. Compared to other blockchain assets, Bitcoin has had an immaculate conception. Also, Bitcoin has an elegantly simple monetary policy and an immutable supply freed from human discretion – something no other cryptocurrency asset can provide. Bitcoin's monetary policy is based on algorithmically determined parameters and is thus perfectly predictable, rule-based, and neither event- nor emotion-driven. By depoliticizing monetary policy and entrusting money creation to the market according to rule-based parameters, Bitcoin’s monetary asset behaves as neutrally as possible. Bitcoin is truly sound money since it provides the highest degree of stability, reliability and security. Most crypto enthusiasts would probably object that while Bitcoin might be the soundest money, its technical capabilities do not allow for DeFi to be built on top of it. As a matter of fact, though, nothing could be further from the truth. 
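Those "algorithmically determined parameters" are concrete: the block subsidy started at 50 BTC and halves every 210,000 blocks, which is what pins total supply just under 21 million. A back-of-the-envelope sketch (simplified to floats; the actual node works in integer satoshis with rounding):

```python
def total_supply_btc(initial_subsidy=50.0, halving_interval=210_000):
    """Sum Bitcoin's geometric issuance schedule: each era mints
    `halving_interval` blocks at the current subsidy, which then halves,
    until the subsidy drops below one satoshi (1e-8 BTC)."""
    subsidy, total = initial_subsidy, 0.0
    while subsidy >= 1e-8:
        total += halving_interval * subsidy
        subsidy /= 2
    return total

supply = total_supply_btc()   # converges just under 21,000,000 BTC
```

The schedule is a plain geometric series, so anyone can verify the supply cap with a dozen lines of arithmetic — that is the sense in which the policy is rule-based rather than discretionary.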


A Simple 5-Step Framework to Minimize the Risk of a Data Breach

The first step businesses need to take to increase the security of their customer data is to review what types of data they're collecting and why. Most companies that undertake this exercise end up surprised by what they find. That's because, over time, the volume and variety of customer information collected expands well beyond a business's original intent. For example, it's fairly standard to collect things like a customer's name and email address. And if that's all a business has on file, it won't be an attractive target to an attacker. But if the business has a cloud call center or any kind of high-touch sales cycle or customer support, it probably collects home addresses, financial data, and demographic information. At that point it has assembled a collection that's perfect for enabling identity theft if the data gets out into the wild. So, when evaluating each collected data point to determine its value, businesses should ask themselves: what critical business function does this data facilitate? If the answer is none, they should purge the data and stop collecting it. 


To Monitor or Not to Monitor a Model — Is there a question?

Evidently AI works by analyzing the training and production datasets. It maps the data from features in the training data to their counterparts in the production data. ... Thereafter it runs different statistical tests depending on the input. Evidently AI then creates graphs that are based on the plotly python library, and you can read more about the code in their open-source GitHub repository. For binary categorical features, it performs a simple Z-test for a difference in proportions to verify whether there is a statistically significant difference in how often the training and production data take each of the two values of the binary variable. For categorical features with more than two values, it performs a chi-squared test, which checks whether the distribution of the variable in the production data is consistent with its distribution in the training data. Finally, for numeric features, it performs a two-sample Kolmogorov-Smirnov goodness-of-fit test, which compares the distributions of the feature in the training and production data to see whether they are likely to come from the same distribution, or whether they vary from each other significantly.
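All three checks map onto standard statistical routines. Here is a hedged sketch using scipy directly rather than Evidently itself — the data is invented, and the two-proportion Z-test is written out by hand because scipy has no one-line version of it:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# Numeric feature: two-sample Kolmogorov-Smirnov test for distribution drift.
train_num = rng.normal(0.0, 1.0, size=1000)
prod_num = rng.normal(0.5, 1.0, size=1000)           # shifted: drift expected
ks_stat, ks_p = stats.ks_2samp(train_num, prod_num)

# Multi-valued categorical feature: chi-squared test, with expected counts
# scaled from the training distribution to the production sample size.
train_counts = np.array([500, 300, 200])
prod_counts = np.array([250, 150, 100])              # same proportions: no drift
expected = train_counts / train_counts.sum() * prod_counts.sum()
chi2_stat, chi2_p = stats.chisquare(prod_counts, f_exp=expected)

# Binary feature: two-proportion Z-test on how often one value occurs.
def two_proportion_z(hits_a, n_a, hits_b, n_b):
    p_pool = (hits_a + hits_b) / (n_a + n_b)
    se = np.sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    z = (hits_a / n_a - hits_b / n_b) / se
    return z, 2 * stats.norm.sf(abs(z))              # two-sided p-value

z, z_p = two_proportion_z(480, 1000, 520, 1000)
```

In each case a small p-value flags a statistically significant gap between training and production — the signal Evidently surfaces in its drift reports.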


IBM’s latest quantum chip breaks the elusive 100-qubit barrier

The Eagle is a quantum processor that is around the size of a quarter. Unlike regular computer chips, which encode information as 0 or 1 bits, quantum computers can represent information in something called qubits, which can have a value of 0, 1, or both at the same time due to a unique property called superposition. By holding over 100 qubits in a single chip, IBM says that Eagle could increase the “memory space required to execute algorithms,” which would in theory help quantum computers take on more complex problems. “People have been excited about the prospects of quantum computers for many decades because we have understood that there are algorithms or procedures you can run on these machines that you can’t run on conventional or classical computers,” says David Gosset, an associate professor at the University of Waterloo’s Institute for Quantum Computing who works on research with IBM, “which can accelerate the solution of certain, specific problems.”
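Superposition is easy to see on a state vector, and the same arithmetic shows why 100+ qubits matter. A toy numpy illustration, unrelated to Eagle's actual hardware:

```python
import numpy as np

# A single qubit is a 2-amplitude complex vector.
zero = np.array([1, 0], dtype=complex)        # |0>
one = np.array([0, 1], dtype=complex)         # |1>
plus = (zero + one) / np.sqrt(2)              # equal superposition of 0 and 1

probs = np.abs(plus) ** 2                     # Born rule: outcome probabilities
# probs is [0.5, 0.5]: measurement yields 0 or 1 with equal probability.

# A register of n qubits needs 2**n amplitudes, which is why simulating
# 100+ qubits classically runs out of memory almost immediately.
n_qubits = 100
n_amplitudes = 2 ** n_qubits
```

That exponential state space is the "memory space required to execute algorithms" the article refers to: it grows with each added qubit far faster than any classical representation can.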


Industrial computer vision is getting ready for growth

Industrial applications, however, present some unique challenges for computer vision systems. Many organizations can’t use pretrained machine learning models that have been tuned to publicly available data. They need models that are trained on their specific data. Sometimes, those organizations don’t have enough data to train their ML models from scratch, so they need to go through some more complicated processes, such as pretraining the model on a general dataset and then finetuning it on their own labeled images. The challenges of industrial computer vision are not limited to data. Sometimes, sensitivities such as safety or transparency impose special requirements on the type of algorithm and accuracy metrics used in industrial computer vision systems. And the team running the model needs an entire MLOps stack to monitor model performance, iterate across models, maintain different versions of the models, and manage a pipeline for gathering new data and retraining the models.


Three Big Myths About Decentralized Finance

Because the blockchain uses so many distinct sources to verify and record what happens within the system, there is also a common misconception that decentralized finance is inherently safer than centralized systems run by a single financial institution. After all, if thousands of sources check my transactions, won't they be able to identify and prevent anyone trying to use my account without my permission? Not necessarily. While it's true that the blockchain does help to safeguard against administrative or accounting errors — as happened recently with one family who mistakenly received $50 billion in their account — it also removes the safeguards that centralized financial businesses provide. Most of today's largest financial institutions have been around for decades. Over the years, federal and industry regulation have been put in place to provide safeguards against fraud. Navigating these safeguards can no doubt be tiresome, but they do provide valuable protections.




Quote for the day:

"A leadership disposition guides you to take the path of most resistance and turn it into the path of least resistance." -- Dov Seidman

Daily Tech Digest - November 20, 2021

How midsize companies are vulnerable to data breaches and other cyberattacks

Since almost the start of 2020, businesses have increasingly turned to remote work, grown the number of devices connecting to their networks, and expanded their use of the cloud. In reaction, more cybercriminals have stretched their repertoire to include ransomware attacks via the cloud and email, endpoint malware, Wi-Fi phishing and insider threats. The security industry also has a tendency to focus on the enterprise market with expensive and expansive products, thus sometimes neglecting mid-market companies. Plus, the security products used by smaller businesses are often misconfigured. ... But less than 1% of midsize companies have Wi-Fi phishing protection in place, and 90% of those that do have it misconfigured. In this type of environment, midsize companies are vulnerable because many lack the required security teams, the in-house expertise, or the advanced and expensive security tools needed to defend themselves. As a result, many such businesses are unable to properly safeguard the company.


Why Sabre is betting against multi-cloud

“We were able to start making that journey, but we had not done a lot in the cloud before. So, originally, we went down a path where we made deals with all the major cloud providers. We were thinking: well, let’s go down the multi-cloud path until we see if there’s something better, because, quite frankly, we didn’t know what we didn’t know and we didn’t want to make the mistake of committing to anybody too early.” And while the leadership was behind the move, DiFonzo felt like he had to get an early win, in part because the company’s customers also had to be on board. For that, the team picked one of Sabre’s most important and most CPU-intensive services: its shopping application, which previously ran in its own Tulsa, Oklahoma data center. That’s where a lot of revenue for Sabre’s customers is generated, so by being able to show improved performance and stability for this service, the team was able to build up credibility for the rest of its roadmap. But by mid-2018, the team realized that using multiple clouds had become a limiting factor. 


How Artificial Intelligence Is Improving Cloud Computing

Embedding artificial intelligence into the cloud framework helps improve data management by automating redundant tasks, identifying, sorting, and indexing different types of data, managing data transactions on the cloud, identifying any faults in the cloud infrastructure, and ultimately streamlining the whole data management process. With the advent of the SaaS model, it became possible to host not just data but also complex software applications and even entire virtual computers on the cloud, which users could access and use as per their requirements. To improve the cloud computing experience further, SaaS application developers began to fuse AI with their applications, and the result was a generation of powerful SaaS applications empowered by AI and ready to offer greater functionality to end users. A popular example is Salesforce Einstein, Salesforce’s intelligent solution for businesses, which uses the power of artificial intelligence for predictive analytics, deeper data insights, comprehensive user-behavior reports, and much more, thereby providing data that businesses can use to formulate their sales strategies and increase their profits.


CQRS pattern

Having separate query and update models simplifies the design and implementation. However, one disadvantage is that CQRS code can't automatically be generated from a database schema using scaffolding mechanisms such as O/RM tools. For greater isolation, you can physically separate the read data from the write data. In that case, the read database can use its own data schema that is optimized for queries. For example, it can store a materialized view of the data, in order to avoid complex joins or complex O/RM mappings. It might even use a different type of data store. For example, the write database might be relational, while the read database is a document database. If separate read and write databases are used, they must be kept in sync. Typically this is accomplished by having the write model publish an event whenever it updates the database. For more information about using events, see Event-driven architecture style. Updating the database and publishing the event must occur in a single transaction.
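A minimal in-memory sketch of the pattern (all names are illustrative): the write model validates and persists a command, then publishes an event; a subscriber keeps the denormalized read model — here, per-customer order totals — in sync.

```python
from collections import defaultdict

class EventBus:
    """Trivial synchronous pub/sub used to propagate write-side events."""
    def __init__(self):
        self.handlers = defaultdict(list)
    def subscribe(self, event_type, handler):
        self.handlers[event_type].append(handler)
    def publish(self, event_type, payload):
        for handler in self.handlers[event_type]:
            handler(payload)

class OrderWriteModel:
    """Command side: validates, persists, then publishes an event."""
    def __init__(self, bus):
        self.orders, self.bus = {}, bus
    def place_order(self, order_id, customer, amount):
        if amount <= 0:
            raise ValueError("amount must be positive")
        self.orders[order_id] = {"customer": customer, "amount": amount}
        self.bus.publish("OrderPlaced",
                         {"order_id": order_id, "customer": customer,
                          "amount": amount})

class OrderReadModel:
    """Query side: a materialized view optimized for reads, no joins needed."""
    def __init__(self, bus):
        self.totals_by_customer = defaultdict(float)
        bus.subscribe("OrderPlaced", self._on_order_placed)
    def _on_order_placed(self, event):
        self.totals_by_customer[event["customer"]] += event["amount"]

bus = EventBus()
writes, reads = OrderWriteModel(bus), OrderReadModel(bus)
writes.place_order("o1", "acme", 120.0)
writes.place_order("o2", "acme", 80.0)
```

In a production system the write to the order store and the event publish would share a single transaction, as the excerpt notes — otherwise a crash between the two leaves the read model permanently stale.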


Top lesson from SolarWinds attack: Rethink identity security

“You’ve got to maintain that [identity] infrastructure. You’ve got to know when it’s been compromised, and when somebody has already got your credentials or is stealing your tokens and presenting them as real,” he said. Digital identity management is notoriously difficult for enterprises, with many suffering from identity sprawl—including human, machine, and application identities (such as in robotic process automation). A recent study commissioned by identity security vendor One Identity revealed that nearly all organizations — 95% — report challenges in digital identity management. The SolarWinds attackers took advantage of this vulnerability around identity management. During a session with the full Gartner conference on Thursday, Firstbrook said that the attackers were in fact “primarily focused on attacking the identity infrastructure” during the SolarWinds campaign. Other techniques that were deployed by the attackers included theft of passwords that enabled them to elevate their privileges (known as kerberoasting); theft of SAML certificates to enable identity authentication by cloud services; and creation of new accounts on the Active Directory server, according to Firstbrook.


3 reasons devops must integrate agile and ITSM tools

Devsecops practices alone don’t cement the collaboration required to bring development, operations, and security functions together to meet these objectives. It requires implementing, tracking, and measuring workflows that span these functions. For many organizations, these workflows bring together agile methodologies used by development teams, including scrum and kanban, with IT service management (ITSM) practices managed by ops, including request management, incident management, problem management, change management, and maintaining a configuration management database (CMDB). Yet, many IT organizations fail to integrate their agile and ITSM tools. The development teams might be using Azure DevOps, Digital.ai, Jira Software, or another agile tool to manage backlogs of user stories, sprints, and releases in the development process. Independently, ops may be using BMC, Cherwell, Ivanti, Jira Service Management, Micro Focus, ServiceNow, or another ITSM tool to manage tickets, track systems, and oversee change management.


An Important Skill for Data Scientists and Machine Learning Practitioners

Data Science as a discipline and profession demands that its practitioners possess various skills, ranging from soft skills such as communication and leadership to hard skills such as deductive reasoning, algorithmic thinking, and programming. But there’s a crucial skill that should be attained by Data Scientists, irrespective of their experience, and that is writing. Even Data Scientists working in technical fields such as quantum computing or healthcare research need to write. It takes time to develop strong writing ability, and there are challenges that Data Scientists confront that might prevent them from expressing their thoughts easily. ... Many experts believe that blogs and articles have a unique role in the machine learning community. Articles are how professionals stay up to date on software releases, learn new methods, and communicate ideas. Technical and non-technical ML articles are the two most frequent sorts of articles you’ll encounter. 


What the hell is the difference between a data analyst and a data scientist?

Just like data analysts, data scientists work towards answering a particular business question that requires data-driven insight. However, data scientists are primarily concerned about estimating unknowns, using algorithms and statistical models to answer these questions. As a result, a key difference is the extent of coding employed in data scientist roles. In this respect, data science roles can be challenging as they require a blend of technical skills and an understanding of business problems in context. A data scientist will often find himself or herself trying out different algorithms to solve a particular problem and might even have to be familiar with pipeline automation. Data scientists also get their hands dirty with much larger sets of data than analysts do and are thus required to possess the skills to explore and model huge amounts of unstructured data, often in a parallel fashion using languages like Scala. Many data scientists eventually realize that a big part of their job involves just cleaning and processing raw data from a multitude of sources and making sure that this process can be replicated for actual deployment and prediction.


Building multichain is a new necessity for DeFi products

This trend is fueled, in part, by the Polkadot and Kusama ecosystem that was built with a multichain philosophy at its core. Parachains connected to the relay chain easily communicate with one another, raising the bar even higher for the entire space. With the second set of parachain slot auctions just around the corner, they continue to set the standard for the multichain industry. Projects that make it easier for the average user to connect more systems — such as the Moonbeam protocol and the Phantom wallet — are raising millions of dollars to simplify this new multichain reality for users. But how do you navigate this as a developer? We can see clearly that the market is shaped by user demands. Depending on their needs, your users are turning to blockchains that better serve them — and to the platforms that offer access to them. As a result, projects that support multiple chains gain larger audiences and more liquidity. This means that at a minimum, your DeFi product needs to support Ethereum and a “niche” blockchain — there are established leaders for trading, staking, nonfungible tokens (NFTs) and more. And the more chains with which you can interact, the better.


The 5 Biggest Blockchain Trends In 2022

Blockchain is hugely compatible with the idea of the Internet of Things (IoT) because it is great for creating records of interactions and transactions between machines. It can potentially help to solve many problems around security as well as scalability due to the automated, encrypted, and immutable nature of blockchain ledgers and databases. It could even be used for machine-to-machine transactions – enabling micropayments to be made via cryptocurrencies when one machine or network needs to procure services from another. While this is an advanced use case that may involve us traveling a little further down the road before it impacts our day-to-day lives, it’s likely we will start to hear about more pilot projects and initial use cases in this field during 2022. Innovation in this field is likely to be driven by the ongoing rollout of 5G networks, meaning greater connectivity between all manner of smart, networked equipment and appliances – not simply in terms of speed, but also new types of data transactions including blockchain transactions.



Quote for the day:

"Good leadership consists of showing average people how to do the work of superior people." -- John D. Rockefeller

Daily Tech Digest - November 19, 2021

The Old Ways Aren’t Working: Let’s Rethink OT Security

Traditionally, OT systems were not connected to the Internet, but that has been changing in recent years as organizations have focused on making OT more efficient, safer, and cost-effective. “One of the ways to do that is to start using IT and connect OT to the Internet,” Masson says. The world of IT has the Internet of Things (IoT). The equivalent in the world of critical infrastructure – the sensors used in manufacturing facilities and out in the field – is the industrial Internet of Things (IIoT). While IT/OT convergence has significant benefits, such as the ability to monitor and manage OT remotely and collect information from sensors located in remote locations, it also introduced threats from the IT world that had never existed before in OT networks, Masson says. ... That is no longer the case. Cybercriminal gangs have figured out that they can make money out of targeting critical infrastructure. While some criminal gangs may be possibly acting on the behalf of nation-states, many are also flowing some of the ransom money “back into their own R&D,” Masson says. The convergence of IT and OT has made it possible for these criminal gangs to adapt their IT-based attacks to target critical infrastructure providers.


How to improve your SaaS security posture and reduce risk

Adaptive Shield’s SaaS Security Posture Management (SSPM) provides proactive, continuous and automated monitoring of any SaaS application, alongside a built-in knowledge base of compliance standards and benchmarks to ensure the highest level of SaaS security available today. As a SaaS offering that integrates with SaaS, the solution can be live in minutes. Once in place, it provides customers with clear visibility into their whole SaaS ecosystem, where it can detect any misconfiguration, incorrect permissions, and all possible exposure, wherever they may be. Through its automated remediation capabilities, the solution sends detailed alerts at the first sign of a security misconfiguration. This allows the security team to quickly open a ticket to fix the issue with no go-between and no lengthy additional steps. ... It’s a common occurrence – that “wow” moment when the client sees their SaaS security posture for the first time on Adaptive Shield. They can immediately see the potential points of breach or leakage, and they are excited to have a map of how to fix them.


Fixing the blind spots in your digital transformation efforts

There’s often a disconnect between what your customers say they want to do and what they actually do. That’s why it is critical to have visibility into your customers’ product journeys. For example, what actions in the product lead to a repeat user? Where are your biggest drop-off rates? Where are users stalling in the purchase process? You can use these insights to optimise your digital product. Facebook famously discovered that the key to great user engagement was adding seven friends in the first 10 days of signing up. The company re-designed its product experience around this insight, and we all know that turned out to be a success. But the tricky part is getting your hands on this product data – the sheer number of data points needed to join, analyse, and correlate customer actions to outcomes makes this incredibly complicated. Companies have tried (and failed) to use web and marketing analytics tools to pull this off, but these products weren’t built for the scale and complexity of today’s digital products. Instead, teams need to utilise product-specific tools that leverage machine learning and offer real-time insights.
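The drop-off question is, at its core, a funnel computation. A toy pandas sketch with invented event data (the step names and numbers are purely illustrative):

```python
import pandas as pd

# Hypothetical product events: which user reached which step of the journey.
events = pd.DataFrame({
    "user": [1, 1, 1, 2, 2, 3],
    "step": ["visit", "signup", "purchase", "visit", "signup", "visit"],
})

funnel = ["visit", "signup", "purchase"]

# Unique users reaching each step, in funnel order.
users_at = [events.loc[events["step"] == s, "user"].nunique() for s in funnel]

# Drop-off rate between consecutive steps: the share of users lost.
drop_off = [1 - users_at[i + 1] / users_at[i] for i in range(len(funnel) - 1)]
```

Real products track far more steps and correlate them with outcomes over much larger datasets — which is exactly why the article argues general web-analytics tools fall short and product-specific tooling is needed.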


Leading With Empathy

From a global perspective, empathy is infinitely important, particularly if it ends in compassion. Empathy motivates people to step in and help those who have been struck by major disasters, even if they are total strangers. Empathy brings out the best in us and improves the global quality of life. There is a dire need for collaboration, compassion, kindness, and empathy in these challenging times. Empathy is the ability to emotionally understand what other people feel, see things from their perspective, and imagine yourself in their place. It is a skill and not a trait. One’s upbringing, environment, life experiences, and interactions with other empathic people strongly influence empathy. Empathy is a scarce resource in our organizations and communities today. Contrary to what people believe, you do not need permission to lead with empathy. Anyone can be an empathic leader. Your actions to improve someone’s quality of life in adversity are what make you an empathic leader. Empathic leaders are in short supply in the workforce as well. The stereotype of a workforce leader has been military in nature with no leeway for human emotions. 


CRISP: Critical Path Analysis for Microservice Architectures

At Uber, most services are Jaeger enabled and hence we are able to collect a reliable dependency graph of RPC calls for a given request. However, the amount of data would be prohibitive if all requests were traced. Hence, we employ a sampling strategy to collect a fraction of requests. If a request is tagged for Jaeger monitoring at the system’s entry point, all its subsequent downstream calls made on behalf of the request are collected into a single trace. We store Jaeger traces in different data stores with different shelf lives. We refer the reader to the Jaeger Uber engineering blog and the open-source code base for further details about Jaeger. Unfortunately, in a complex microservice environment, the Jaeger traces are hard to digest via visual inspection. Even a single trace can be very complicated as shown in the call graph and timeline views in Figure 2 below, which are taken from a real-world microservice interaction at Uber. This motivates the need for tooling to analyze and summarize the traces into actionable tasks for developers.
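Head-based sampling, as described, makes one decision at the system's entry point and propagates it with the request context, so any given trace is either complete or absent — never partial. A toy sketch of the idea (not Jaeger's code; the rate and field names are illustrative):

```python
import random

SAMPLE_RATE = 0.01   # fraction of requests traced (illustrative value)

def start_request(rng=random):
    """Entry point: decide once whether this request will be traced."""
    return {"trace_id": rng.getrandbits(64),
            "sampled": rng.random() < SAMPLE_RATE}

def downstream_call(ctx, spans):
    """Every downstream RPC inherits the entry point's sampling decision,
    so all spans of one request land in the same trace."""
    if ctx["sampled"]:
        spans.append({"trace_id": ctx["trace_id"], "service": "downstream"})

spans = []
ctx = start_request()
for _ in range(3):
    downstream_call(ctx, spans)
# spans now holds either 3 spans sharing one trace_id, or nothing at all.
```

Collecting whole traces (rather than sampling spans independently) is what makes the dependency graph of RPC calls for a request reliable, at the cost of seeing only a fraction of traffic.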


The Mindset of an Impactful Component Team in Agile

Developing a solution that doesn't exist, or that needs to be modified to fit into the layers of an architecture, is a huge responsibility. In the beginning, the right solution might look like a far-away dream for a number of reasons: the time taken to select and obtain tools, initial prototype failures, a lack of ideas, a solution stuck on a unique point that requires significant exploration or help from open-source communities (adding to the delay), infrastructure issues, and so on. The successful component teams I have seen don't get carried away by situational setbacks; they understand the inherent challenges of the technology they work with and remain determined to get the job done. They take on challenges daily, exhibit perseverance, possess a never-say-die attitude, are open to discussion, and reach out to the people whose help they need. Leadership support, sessions with agile coaches, and grooming by experienced SMEs all play a role in helping teams develop this mindset, which assures the desired outcome in the long run.


How to Build a Security Awareness Training Program that Yields Measurable Results

Employees represent security risks mainly because they are unaware of how their actions and decisions cause security incidents. To address this, enterprises undertake extensive security awareness training efforts to help employees know what they should and shouldn't do when working digitally. Merely exposing employees to security training is not enough; a program is not effective unless it builds real skills that change employee behavior and empower them to make the right choice in the face of a cyberattack. To achieve this, companies must select security awareness training that is data-driven; adapts to each employee's location, role, and behavior towards cyber training; and is continuous and high-frequency, engaging each employee at least once a month. Some of the key features organizations should look for in a security awareness program can be divided into the following. The more employees are exposed to real-life phishing emails and other security risks, the more likely they are to succeed in protecting the organization and its assets against phishing, malware, and many other threats.


How Automation is Changing Entry-Level Career Paths in IT

“It is important to realize that AI and automation won’t be replacing IT workers,” says venture advisor and investor Frank Fanzilli. “These technologies simply enable an IT worker to effectively manage ever more complex and rapidly changing systems.” Fanzilli says that with automation on the way to becoming an entirely new discipline within IT -- one that will radically change how IT work is delivered -- entry-level IT workers have a great opportunity to exploit the skills gap and become the next generation of IT leaders. He says entry-level engineers should make sure they understand transformative automation technologies, such as robotic process automation and digital platform conductors, and then build a career path that leverages these technologies to drive ever greater business value. “You’re already seeing this happen with the rapid adoption of platforms such as UiPath and ReadyWorks and the effect these human/automation interfaces are having in driving down costs and improving overall quality,” Fanzilli notes.


Tackling the root of the public sector’s cyber security problem

Many governmental organisations rely on outdated systems, choosing to retain platforms that are increasingly frustrating to use. Budgetary constraints and responsibility for public money can lead the public sector to veto new technology investments in favour of an ‘if it ain’t broke’ mentality. Of course, stringing along outdated systems is a false economy. Built in a different era for different demands, legacy IT impedes the work of individuals, teams, or entire organisations and often requires a complex estate of specialised and tailored legacy applications. Over time, these outdated ecosystems become more expensive to support, patch, and update, consuming up to 50% of annual IT budgets in the case of the UK government itself. On the flipside, newer systems, applications, and platforms unlock a wealth of benefits, from bottom-line financial improvements and efficiency gains to a much better user experience. The problem is, the longer outdated technology is in place, the more difficult it is to replace. Rewriting those applications from scratch to ensure compatibility with modern platforms can be expensive and time-consuming.


Faster Financial Software Development Using Low Code: Focusing on the 4 Key Metrics

To that end, low-code/no-code platforms are rapidly accelerating the capabilities of the enterprise to develop robust, bespoke applications with speed and security as part of their remit to clients. Examples of these range from the extremely targeted Genesis, a low-code/no-code platform built specifically for financial markets to the “one-size-fits-all” Appian, a general purpose low-code/no-code platform used to build many enterprise applications. With low code and no code, “citizen” developers are empowered to build applications and help unclog always-under-pressure IT departments. Achieving speed, stability, and availability in software development is possible; in fact, these are all complementary outcomes. In this article, I’ll share actionable tips to achieve an effective pace of software development, as defined by the 4 key performance metrics described in Accelerate, a book by Dr. Nicole Forsgren et al., and with current industry data from the 2021 State of CD report from the Continuous Delivery Foundation.



Quote for the day:

"The person who sees the difficulties so clearly that he does not discern the possibilities cannot inspire a vision in others." - J. Oswald Sanders