Daily Tech Digest - October 07, 2021

Encryption: Why security threats coast under the radar

This application of AI became a valuable source of IT expertise that multiplied staff bandwidth to manage the solution and allowed for complete, in-depth monitoring of the entire networked environment. With Flowmon ADS in place, the institute has a comprehensive, yet noise-free overview of suspicious behaviours in the partner networks, flawless detection capability, and a platform for the validation of indicators of compromise. Flowmon’s solution works at scale too. GÉANT – a pan-European data network for the research and education community – is one of the world’s largest data networks, transferring over 1,000 terabytes of data per day over the GÉANT IP backbone. At that scale there is simply no way to manually monitor the entire network for aberrant data. With a redundant deployment of two Flowmon collectors running in parallel, GÉANT was able to have a pilot security solution managing data flows of this scale live in just a few hours. After a few months of further testing, integration and algorithmic learning, the solution was ready to protect GÉANT’s entire network from encrypted data threats.


In The Digital Skills Pipeline, A Shift Away From Traditional Hiring Modes

“As digital transformation accelerates and we experience generational shifts, professionals will increasingly desire better work-life balance and freedom from legacy in-office models,” says Saum Mathur, chief product, technology and AI officer with Paro. “Consultancies and others that are reliant on legacy models are struggling to adapt to this new reality, and marketplaces are only furthering these models’ disruption. Three to five years ago, the gig economy pioneers offered customers finite, task-based services that didn’t require extensive experience and enabled flexible scheduling. With continued shifts in the technical and cultural landscape, the gig economy has been extended into professional services, which is powered by highly experienced subject matter experts of all levels.” Corporate culture needs to be receptive to the changes wrought by digital transformation. Forty-one percent of executives in the Alliantgroup survey have encountered employee resistance, while 32% say they have had “the wrong team or department overseeing initiatives.”


Remote-working jobs: Disaster looms as managers refuse to listen

The Future Forum Pulse survey echoed a sentiment that has been voiced repeatedly over the past 18 or so months: employees have embraced remote working, and see it as a pillar of their future working preferences. Yet executives are more likely than lower-level workers to be in favour of a working week based heavily around an office. Of those surveyed, 44% of executives said they wanted to work from the office every day, compared to just 17% of employees. Three-quarters (75%) of executives said they wanted to work from the office 3-5 days a week, versus 34% of employees. This disconnect between employer and employee preferences risks being entrenched into new workplace policies, researchers found. Two-thirds (66%) of executives reported they were designing post-pandemic workforce plans with little to no direct input from employees – and yet 94% said they were "moderately confident" that the policies they had created matched employee expectations. What's more, more than half (56%) of executives reported they had finalized their plans on how employees can work in the future. 


Will the cloud eat your AI?

"CSPs' cloud and digital services have given them access to the enormous amounts of data required to effectively train AI models," the authors concluded. Such economies of scale have been an asset to the cloud providers for years. Years ago, RedMonk analyst Stephen O'Grady highlighted the "relentless economies of scale" that the cloud providers brought to hardware: they could simply build more cheaply than any enterprise could hope to replicate in their own data centers. Now the CSPs enjoy a similar advantage with data. But it's not merely a matter of raw data. The CSPs also have more experience using that data on a large scale. The CSPs have products (e.g., Amazon Alexa to assist with natural language processing, or Google Search to help with recommendation systems). Lots of data feeding ever-smarter applications feeding more data into the applications... it's a self-reinforcing cycle. Oh, and that hardware mentioned earlier? The CSPs also have more experience tuning hardware to process machine learning workloads at scale.


Operationalizing machine learning in processes

Operationalizing ML is data-centric—the main challenge isn’t identifying a sequence of steps to automate but finding quality data that the underlying algorithms can analyze and learn from. This can often be a question of data management and quality—for example, when companies have multiple legacy systems and data are not rigorously cleaned and maintained across the organization. However, even if a company has high-quality data, it may not be able to use the data to train the ML model, particularly during the early stages of model design. Typically, deployments span three distinct, and sequential, environments: the developer environment, where systems are built and can be easily modified; a test environment (also known as user-acceptance testing, or UAT), where users can test system functionalities but the system can’t be modified; and, finally, the production environment, where the system is live and available at scale to end users.
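The three sequential environments described above can be made concrete in configuration. A minimal Python sketch of the idea (the environment names, flags, and the `DEPLOY_ENV` variable are illustrative assumptions, not part of the source):

```python
import os
from dataclasses import dataclass

@dataclass(frozen=True)
class EnvConfig:
    name: str
    mutable: bool          # can the system be modified in place?
    serves_end_users: bool # is it live and available at scale?

# The three sequential deployment environments from the text
ENVIRONMENTS = {
    "dev":  EnvConfig("developer", mutable=True, serves_end_users=False),
    "uat":  EnvConfig("user-acceptance testing", mutable=False, serves_end_users=False),
    "prod": EnvConfig("production", mutable=False, serves_end_users=True),
}

# Select the active environment from a (hypothetical) environment variable
current = ENVIRONMENTS[os.environ.get("DEPLOY_ENV", "dev")]
print(current.name)
```

Encoding rules such as "UAT and production cannot be modified in place" in configuration makes them checkable by deployment tooling rather than a matter of convention.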


MLOps essentials: four pillars for Machine Learning Operations on AWS

Managing code in Machine Learning applications is a complex matter. Let’s see why! Collaboration on model experiments among data scientists is not as easy as sharing traditional code files: Jupyter Notebooks allow for writing and executing code, but make the git chores needed to keep code synchronized between users more difficult, with frequent merge conflicts. Developers must code on different sub-projects: ETL jobs, model logic, training and validation, inference logic, and Infrastructure-as-Code templates. All of these separate projects must be centrally managed and adequately versioned! For modern software applications, there are many consolidated version-control practices, such as conventional commits, feature branching, squash and rebase, and continuous integration. These techniques, however, are not always applicable to Jupyter Notebooks since, as stated before, they are not simple text files. Data scientists need to try many combinations of datasets, features, modeling techniques, algorithms, and parameter configurations to find the solution that best extracts business value.
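One common mitigation for notebook merge conflicts is to version only the cell sources and strip the volatile parts (outputs, execution counts) before committing; tools such as nbstripout and Jupytext automate this. A minimal hand-rolled sketch of the idea, using a made-up notebook as input (a `.ipynb` file is just JSON):

```python
import json

def strip_outputs(nb: dict) -> dict:
    """Remove outputs and execution counts so only source code is versioned."""
    for cell in nb.get("cells", []):
        if cell.get("cell_type") == "code":
            cell["outputs"] = []
            cell["execution_count"] = None
    return nb

# Minimal notebook as it might look after a run, with outputs embedded
notebook = {
    "cells": [
        {"cell_type": "code",
         "source": ["x = 1 + 1\n", "x"],
         "execution_count": 7,
         "outputs": [{"output_type": "execute_result",
                      "data": {"text/plain": ["2"]}}]},
        {"cell_type": "markdown", "source": ["# Experiment notes"]},
    ],
    "nbformat": 4, "nbformat_minor": 5,
}

# Round-trip through JSON to simulate loading a file, then strip
cleaned = strip_outputs(json.loads(json.dumps(notebook)))
print(cleaned["cells"][0]["outputs"])          # []
print(cleaned["cells"][0]["execution_count"])  # None
```

Run as a pre-commit step, this keeps diffs limited to actual code changes, so the consolidated version-control practices mentioned above become usable again.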


Why Unsupervised Machine Learning is the Future of Cybersecurity

There are two types of Unsupervised Learning: discriminative models and generative models. Discriminative models are only capable of telling you that if you give them X, the consequence is Y, whereas a generative model can tell you the total probability that you’re going to see X and Y at the same time. So the difference is as follows: the discriminative model assigns labels to inputs, and has no predictive capability. If you gave it a different X that it has never seen before, it can’t tell what the Y is going to be, because it simply hasn’t learned that. With generative models, once you set one up and find the baseline, you can give it any input and ask it for an answer. Thus, it has predictive ability – for example, it can generate a possible network behavior that has never been seen before. So let’s say some person sends a 30-megabyte file at noon: what is the probability that he would do that? If you asked a discriminative model whether this is normal, it would check to see if the person had ever sent such a file at noon before… but only specifically at noon.
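The 30-megabyte-file example can be sketched with a toy generative baseline: fit simple distributions to observed behaviour, then score the joint probability of any event, seen or unseen. This is an illustrative simplification (independent Gaussians over made-up data), not the model described in the article:

```python
import math
from statistics import mean, stdev

# Hypothetical training data: (hour_of_day, megabytes_sent) per upload event
events = [(9, 2.1), (10, 3.5), (11, 1.8), (12, 4.0), (13, 2.7),
          (12, 3.1), (14, 2.4), (12, 3.8), (10, 2.9), (11, 3.3)]

def gaussian_pdf(x, mu, sigma):
    """Probability density of x under a normal distribution."""
    return math.exp(-((x - mu) ** 2) / (2 * sigma ** 2)) / (sigma * math.sqrt(2 * math.pi))

# Generative baseline: model hour and size, so we can score the joint
# probability of ANY event, even one never seen in training.
hours = [h for h, _ in events]
sizes = [s for _, s in events]
h_mu, h_sigma = mean(hours), stdev(hours)
s_mu, s_sigma = mean(sizes), stdev(sizes)

def joint_likelihood(hour, size):
    return gaussian_pdf(hour, h_mu, h_sigma) * gaussian_pdf(size, s_mu, s_sigma)

# A 30 MB file at noon scores far lower than a typical 3 MB file at noon
print(joint_likelihood(12, 3.0) > joint_likelihood(12, 30.0))  # True

# Discriminative-style lookup: can only answer for inputs it has literally seen
seen = set(events)
print((12, 30.0) in seen)  # False -> no answer for an unseen event
```

The generative model returns a (tiny) probability for the never-seen event, which is exactly what an anomaly detector needs; the lookup can only shrug.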


Sorry, Blockchains Aren’t Going to Fix the Internet’s Privacy Problem

Recently, a number of blockchain-based companies have sprung up with the vision of helping people take control of their data. They get an enthusiastic reception at conferences and from venture capitalists. As someone who cares deeply about my privacy, I wish I thought they stood a better chance of success, but they face many obstacles on the road ahead. Perhaps the biggest obstacle I see for personal-data monetization businesses is that your personal information just isn’t worth that much on its own. Data aggregation businesses run on a principle that’s sometimes referred to as the “river of pennies.” Each individual user or asset has nearly zero value, but multiply the number of users by millions and suddenly you have something that looks valuable. That doesn’t work in the reverse, however. Companies are far more focused and disciplined in the pursuit of millions of dollars in ad or data revenue than one consumer trying to make $25 a year. But why isn’t your data worth that much? Very simply, the world is awash in your information, and you’re not the only source of that information. The truth is that you leak information constantly in a digital ecosystem.


Iranian APT targets aerospace and telecom firms with stealthy ShellClient Trojan

The Trojan is created with an open-source tool called Costura that enables the creation of self-contained compressed executables with no external dependencies. This might also contribute to the program's stealthiness and explain why it wasn't discovered and documented until now, after three years of operation. Another possible reason is that the group only used it against a small and carefully selected pool of targets, albeit across multiple geographies. ShellClient has three deployment modes controlled by execution arguments. One installs it as a system service called nhdService (Network Hosts Detection Service) using the InstallUtil.exe Windows tool. Another execution argument uses the Service Control Manager (SCM) to create a reverse shell that communicates with a configured Dropbox account. A third execution argument only executes the malware as a regular process. This seems to be reserved for cases where attackers only want to gather information about the system first, including which antivirus programs are installed, and establish whether it's worth deploying the malware in persistence mode.


How financial services can invest in the future with predictive analytics

Predictive analytics empowers users to make better decisions that consider what has happened and what is likely to happen based on the available data. And those decisions can only be made if employees understand what they’re working with. They need good data literacy competencies to understand, challenge, and take action based on the insights, with greater abilities to recognise the limitations and question the output of predictive analytics. After all, a forecast’s accuracy depends on the data fuelling it, so its performance could be impacted during an abnormal event or by intrinsic bias in the dataset. Employees must have confidence in their understanding of the data to question its output. This is particularly true when decisions could directly impact customers’ lives – and decisions made in the financial sector are especially influential, from agreeing an overdraft that helps a customer make it to payday to approving a mortgage application in time.



Quote for the day:

"All leadership takes place through the communication of ideas to the minds of others." -- Charles Cooley

Daily Tech Digest - October 06, 2021

Deep Learning's Diminishing Returns

While deep learning's rise may have been meteoric, its future may be bumpy. Like Rosenblatt before them, today's deep-learning researchers are nearing the frontier of what their tools can achieve. To understand why this will reshape machine learning, you must first understand why deep learning has been so successful and what it costs to keep it that way. ... Deep-learning models are overparameterized, which is to say they have more parameters than there are data points available for training. Classically, this would lead to overfitting, where the model not only learns general trends but also the random vagaries of the data it was trained on. Deep learning avoids this trap by initializing the parameters randomly and then iteratively adjusting sets of them to better fit the data using a method called stochastic gradient descent. Surprisingly, this procedure has been proven to ensure that the learned model generalizes well. The success of flexible deep-learning models can be seen in machine translation. For decades, software has been used to translate text from one language to another. Early approaches to this problem used rules designed by grammar experts.
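The training procedure described here – random initialization followed by iterative adjustment via stochastic gradient descent – can be shown in a few lines. A toy sketch fitting a single parameter to synthetic data (the learning rate, step count, and data are arbitrary choices for illustration):

```python
import random

random.seed(0)

# Toy data generated from y = 3x plus a little noise
data = [(x, 3.0 * x + random.uniform(-0.1, 0.1))
        for x in [i / 10 for i in range(1, 21)]]

w = random.uniform(-1, 1)  # initialize the parameter randomly
lr = 0.05                  # learning rate

# Stochastic gradient descent: update on one randomly chosen point at a time
for step in range(2000):
    x, y = random.choice(data)
    grad = 2 * (w * x - y) * x  # d/dw of the squared error (w*x - y)^2
    w -= lr * grad

print(round(w, 2))  # w ends up close to the true slope of 3.0
```

Real deep-learning models apply the same loop to millions or billions of parameters at once; the overparameterization discussed above means there are far more `w`-like values than data points, yet the same iterative procedure still tends to generalize.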


IT security and cybersecurity: What's the difference?

Information technology focuses on the systems that store and transmit digital information. Cybersecurity, in contrast, focuses on protecting electronic information stored within those systems. Cybersecurity usually focuses on digital information and infrastructure. Infrastructure may include internet connections and local area networks that store and share information. In short, cybersecurity focuses on preventing hackers from gaining digital access to important data on networks, on computers, or within programs. Workers in IT and cybersecurity have varying job titles depending on their education, training, experience, and responsibilities. One subset of IT, IT security, focuses on protecting access to computers, networks, and information. IT security professionals may create plans to protect digital assets and monitor computer systems and networks for threats. They may also work to protect the physical equipment storing the data, along with the data itself. Another subset of IT, information security, focuses on securing data and systems against unauthorized access. 


How to quit your job and start your business in 90 days

Quitting is a straightforward decision that requires courage, boldness, and a strong belief in what you are about to do. On the other hand, staying in a job you don't like can be the worst death sentence for your happiness and personal fulfillment. Quitting your job should be done wisely and in a balanced way, and building a business that replaces the security of income from your previous job is an art. ... Stopping working for someone else doesn't automatically make you able to work for yourself, but it does qualify you to try. Starting a business is like planning an expedition to Mount Everest. Climbing the highest peak in the world requires money, training, and a year of planning, and only 49% of those who attempt it make it to the top. A dream without a deadline is a wish. Sitting for months contemplating your idea is one of the worst passive tactics for avoiding commitment. Set a date to quit your job and dedicate yourself full time to your business. Just as it is important to set a start date, it is just as important to designate an end date: a date on which, with maturity and wisdom, you can say "this is not working."


How one coding error turned AirTags into perfect malware distributors

“Security consultant and penetration tester Bobby Rauch discovered that Apple's AirTags — tiny devices which can be affixed to frequently lost items like laptops, phones, or car keys — don't sanitize user input. This oversight opens the door for AirTags to be used in a drop attack. Instead of seeding a target's parking lot with USB drives loaded with malware, an attacker can drop a maliciously prepared AirTag,” the publication reported. “This kind of attack doesn't need much technological know-how — the attacker simply types valid XSS into the AirTag's phone number field, then puts the AirTag in Lost mode and drops it somewhere the target is likely to find it. In theory, scanning a lost AirTag is a safe action — it's only supposed to pop up a webpage at https://found.apple.com/. The problem is that found.apple.com then embeds the contents of the phone number field in the website as displayed on the victim's browser, unsanitized.” The worst part about this hole is that the damage it can inflict is only limited by the attacker’s creativity. 
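The underlying fix for this class of bug is standard: validate or escape untrusted input before embedding it in a page. A generic Python sketch of both defenses (this is not Apple's code; the field handling, payload, and patterns are illustrative):

```python
import html
import re

def render_phone_field(raw: str) -> str:
    """Escape untrusted input so it renders as text, not markup."""
    return "<p>Contact: %s</p>" % html.escape(raw, quote=True)

def is_valid_phone(raw: str) -> bool:
    """Stricter defense: allowlist-validate before storing the value at all."""
    return re.fullmatch(r"\+?[0-9 ()-]{5,20}", raw) is not None

# The kind of payload described in the attack: script injected via a text field
payload = '<script>window.location="https://evil.example/"</script>'

print(is_valid_phone(payload))      # False: rejected at input time
print(render_phone_field(payload))  # tags are escaped, so no script runs
print(is_valid_phone("+1 (555) 123-4567"))  # True: a real number passes
```

Either layer alone would have blocked the drop attack; defense in depth applies both, rejecting junk on input and escaping everything on output.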


Why today’s cybersecurity threats are more dangerous

Unlike 20 years ago, when even extensive IT systems were comparatively standalone and straightforward, the interdependencies of systems now make dealing with and defending against threats a much more difficult proposition. "The core problem here is complexity and our interdependence," Snyder said. "That is something that we're not going to move away from because that is providing us flexibility and functionality and all these other critical functions that we need. We've got a growing problem here." One new variable thrown into the digital mix is the meteoric growth of ransomware, which makes it appear that cyberattacks are getting worse. "I think that the ransomware attackers have found a perfectly successful illegitimate business model," Rand Corporation researcher Jonathan Welburn said. "Every time there's a large-scale attack, we see that [victims] issue a payment, and it solves the problem. It's a really good advertisement for that business model." Jay Healey, a senior research scholar at Columbia University, said that at one level, cybersecurity risks are unchanged from what they were two decades ago. "We've been here before," he said.


The insecure application conundrum: how to stop the influx of vulnerable applications

The fundamental root cause of application insecurities can be attributed to the fact that security awareness training for developers is virtually non-existent. Developers do not willingly deploy applications in the hope that exploits are never found. Instead, there still exists a lack of exposure and experience that plays a part in them not understanding the actual severity of some of the vulnerabilities. At the same time, there is a global shortage of experienced developers, as evidenced by the fact that vacancies for application development security developers are set to grow 164% in the next five years. Finding an experienced developer with a rounded skillset is like finding a needle in a haystack. As a result, for businesses, there is more economic value in investing in the training of developers in cyber security to build their competence at secure development methods, linked to their business. In essence, there are two major ways to distinguish how vulnerabilities are caused – through technical vulnerabilities and business logic flaws.


Facebook outage was a series of unfortunate events

Facebook says the root cause of its outage Monday involved a routine maintenance job gone awry that resulted in rendering its DNS servers unavailable, but first the entire Facebook backbone network had crashed. To make matters worse, the loss of DNS made it impossible for Facebook engineers to remotely access the devices they needed to in order to bring the network back up, so they had to go into the data centers to manually restart systems. That slowed things down, but they were slowed down even more because the data centers have safeguards in place to make tampering hard—for anybody. “They’re hard to get into, and once you’re inside, the hardware and routers are designed to be difficult to modify even when you have physical access to them,” according to a Facebook blog written by Santosh Janardhan, the company's vice president of engineering and infrastructure. It took time, but once the systems were restored, the network came back up. Restoring the customer-facing services that run over the network was another lengthy process because turning them up all at once could cause another round of crashes. 


The Three Symptoms of Toxic Leadership and How to Get Out of It

Toxicity has eaten deep into the very fabric of what is standard in the workplace. Why is it okay for people to use swear words and hate on one another, but not okay to use words such as love and appreciation? Why is what is supposed to be the norm now considered or seen as being “out there”? That's not right, and a change in this thought pattern is long overdue. Now is the time to educate everyone on the importance of speaking right, doing right, treating each other right in the workplace, and above all, being a nontoxic leader. It’s time we stop being toxic leaders and take action. Once I started studying and analyzing my own toxic traits, I was able to come out of it. And now, I help other successful leaders in tech do the same. For example, I was once working with an engineering manager at a start-up company. She worked around the clock to provide everything for her team. She did sufficient training, was nice to everyone, and provided all the support she possibly could.


Hybrid work: 9 ways to encourage healthy team conflict

Diversity of thought leads to better solutions in the end. “Leaders of high-performing teams consistently convey the importance of conflict and push the team to engage in constructive debate, even to the point that the tension makes team members uncomfortable, to generate the best decisions,” says Andy Atkins, practice leader at BTS Boston. This can be trickier in the hybrid world. “It is more difficult to gauge team members’ reactions, or test the temperature in the room, and it is easier for team members themselves to withdraw from the conversation,” says Atkins. Therefore, leaders must be more deliberate in creating a culture that encourages speaking up. The most successful leaders not only model the willingness to face conflict themselves, but also help team members express their own points of view. “It helps if the team leader takes care to reserve his or her own observations in discussions to allow others to speak first, and to deliberately draw out different opinions around the table before moving on,” says Atkins.


Critical infrastructure IoT security: Going back to basics

Ultimately, IoT devices weren’t built with security in mind. The vast majority of IoT devices tend to be poorly secured, often running out-of-date software or using default security configurations, which makes them vulnerable targets for threat actors. The fact is that until the last 5 or 10 years, security wasn’t even considered as part of developing OT. It’s not like a hospital buys a new MRI machine every year, so that 10-year-old MRI machine in the hospital is still highly vulnerable, since it was built in a time when security wasn’t important or thought of. It is unsurprising that the vulnerability of IoT, and the critical infrastructure landscape as a whole, to cyberattacks is becoming a growing concern within the security landscape, and recent attacks on the sector have proven the need to ramp up security efforts. Even though IoT is becoming an increasing target, the focus of many recent attacks is on OT infrastructure. For that reason, the critical infrastructure industry must take a security-first stance to securing its operations.



Quote for the day:

"Leaders keep their eyes on the horizon, not just on the bottom line." -- Warren G. Bennis

Daily Tech Digest - October 05, 2021

How cloud-native apps and microservices impact the development process

One of the more important coding disciplines in object-oriented programming and SOA is code refactoring. The techniques allow developers to restructure code as they better understand usage considerations, performance factors, or technical debt issues. Refactoring is a key technique for transforming monolithic applications into microservices. Refactoring strategies include separating the presentation layer, extracting business services, and refactoring databases. Robin Yeman, strategic advisory board member at Project and Team, has spent most of her career working on large-scale government and defense systems. Robin concedes, “The largest technology barriers to utilizing agile in building or updating complex legacy systems are the many dependencies in the software architecture, forcing multiple handoffs between teams and delays in delivery.” Robin suggests that refactoring should focus on reducing dependencies. She recommends, “Refactoring the software architecture of large legacy systems to utilize cloud-native applications and microservices reduces dependencies between the systems and the teams supporting them.”


Web3 Architecture and How It Compares to Traditional Web Apps

According to Kasireddy, backend programming for a dapp is entirely different than for a traditional web application. In Web3, she writes, “you can write smart contracts that define the logic of your applications and deploy them onto the decentralized state machine [i.e. the Ethereum blockchain].” Web servers and traditional databases, in this paradigm, are no longer needed — since everything is done on, or around, the blockchain. She notes a bit later in the post that “Smart contracts are written in high-level languages, such as Solidity or Vyper.” Solidity was partly inspired by ECMAScript syntax, so it has some similarities to JavaScript (but is very different in other ways). As for the frontend, that “pretty much stays the same, with some exceptions,” writes Kasireddy. ... There are also complications when it comes to “signing” transactions, which is the cryptographic process that keeps blockchains secure. You need a tool like MetaMask to handle this.


UEFI threats moving to the ESP: Introducing ESPecter bootkit

Even though Secure Boot stands in the way of executing untrusted UEFI binaries from the ESP, over the last few years we have been witness to various UEFI firmware vulnerabilities affecting thousands of devices that allow disabling or bypassing Secure Boot. This shows that securing UEFI firmware is a challenging task and that the way various vendors apply security policies and use UEFI services is not always ideal. Previously, we have reported multiple malicious EFI samples in the form of simple, single-purpose UEFI applications without extensive functionality. These observations, along with the concurrent discovery of the ESPecter and FinFisher bootkits, both fully functional UEFI bootkits, show that threat actors are not relying only on UEFI firmware implants when it comes to pre-OS persistence, but also are trying to take advantage of disabled Secure Boot to execute their own ESP implants. We were not able to attribute ESPecter to any known threat actor, but the Chinese debug messages in the associated user-mode client component leads us to believe with a low confidence that an unknown Chinese-speaking threat actor is behind ESPecter.


Business Leadership Changed: The New Skills You Must Master

Strategic plans are important to achieving your vision, but they can't be set in stone either. The pandemic was an unforeseen situation that took all companies in the world by surprise. Consequently, it is important to be ready to turn, change course quickly, and try to affect the entire organization as little as possible. ... People are inherently social creatures. It should come as no surprise then that we long to feel connected to the people we spend most of our time with. So how can we, as business leaders, help these connections occur between employees? Gregg Lederman is a bestselling author focused on employee interaction. After a long investigation he discovered 3 things that people need at work to feel completely fulfilled: The Need for Recognition: People have a need to be recognized for the skill and perspective they bring and for the challenges they have accomplished; The need for respect: People want to be respected for who they are as individuals and professionals and how they contribute to the team; The need for relationships: People want satisfying relationships with the people they work with.


Encrypted & Fileless Malware Sees Big Growth

“This malware family uses PowerShell tools to exploit various vulnerabilities in Windows,” according to the firm. “But what makes it especially interesting is its evasive technique. WatchGuard found that AMSI.Disable.A wields code capable of disabling the Antimalware Scan Interface (AMSI) in PowerShell, allowing it to bypass script security checks with its malware payload undetected.” ... In just the first six months of 2021, malware detections originating from scripting engines like PowerShell had already reached 80 percent of last year’s total script-initiated attack volume. At its current rate, 2021 fileless malware detections are on track to double in volume year over year. “Malicious PowerShell scripts have been known to hide in the memory of the computer and already use legitimate tools, binaries and libraries that come installed on most Windows systems,” explained the report. “That is why attackers have increased their use of this technique, called living off the land (LotL) attacks. Using these methods, a vaporworm might make its script invisible to many antivirus systems that don’t inspect the scripts or systems’ memory.”


What if Chrome broke features of the web and Google forgot to tell anyone?

Earlier this year Chrome developers decided that the browser should no longer support JavaScript dialogs and alert windows when they're called by third-party iframes. That means that if something is embedded from another website, let's say a YouTube video, Chrome wants to stop allowing that embedded content to call the JavaScript alert function, which opens a small alert window. Eventually Chrome aims to get rid of alert windows altogether. So what happens when Chrome does this? At first nothing because it's an obscure entry in a bug tracker – CC'd to the Web Hypertext Application Technology Working Group (WHATWG) – that Chromium and other browser engineers read. ... You know what isn't happening here? No substantial public discussion happens, certainly not with builders of websites. Google puts its idea forward as bug reports, some folks at Apple working on WebKit and at Mozilla working on Firefox are invited to agree with it in a WHATWG GitHub thread and Bugzilla discussion, and they do. Google gets what it wants and the web breaks.


The Shortfalls of Mean Time Metrics in Cybersecurity

As a measurement standard, mean times are a legacy paradigm brought over from call centers many eons ago. Over the years, cybersecurity leaders adopted similar metrics because IT departments were familiar with them. In today's reality, mean times don't map directly to the type of work we do in cybersecurity, and we can't entirely generalize them to be meaningful indicators across the attack lifecycle. While these averages might convey speed relative to specific parts of the attack lifecycle, they don't provide any actionable information other than potentially telling you to hurry up. In the best-case scenario, MTTX becomes a vanity metric that looks great on an executive dashboard but provides little actual business intelligence. ... The fastest MTTX is not worth anything if it measures the creation of an inaccurate alert. We want mean time metrics to tell us about actual alerts, or true positives and not be skewed by bad data. So, you might be thinking, "how does an untuned MTTX tell you about the quality of work your security provider does, or how safe it makes your systems?" 
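The skew is easy to demonstrate: a mean over all alerts, false positives included, can look far healthier than the mean over true positives alone. A toy illustration with made-up response times:

```python
from statistics import mean

# Hypothetical alert log: (minutes_to_respond, was_true_positive)
# False positives are closed quickly; real incidents take much longer.
alerts = [(2, False), (3, False), (2, False), (45, True), (60, True), (4, False)]

# Raw mean time to respond over every alert looks impressively low...
raw_mttr = mean(t for t, _ in alerts)

# ...but restricted to true positives it tells a very different story
true_mttr = mean(t for t, tp in alerts if tp)

print(round(raw_mttr, 1))  # 19.3 minutes
print(true_mttr)           # 52.5 minutes
```

A dashboard reporting the first number is the vanity metric described above: fast closure of inaccurate alerts drags the average down without making anyone safer.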


How Non-Fungible Tokens Work: NFTs Explained, Debunked, and Legitimized

In a real marketplace, even if the property is intellectual property (such as a patent or copyright, whose form can be entirely digital), there will likewise need to be a contractual transfer of the rights to that intellectual property to a new party, with the transfer again having the full endorsement and power of law behind it. For instance, if in making an intellectual property purchase, I acquire the copyright to a picture, even a digital picture, the real market that operates in our society ensures that the transfer is subject to its laws and strictures. Through my purchase, I will own the picture in a real sense and can take legal action against anyone who tries to infringe on my copyright (such as by posting it on a blog without my permission). By contrast, the concept of owning an NFT on a blockchain is specific to the blockchain with no legal force in the society at large. Suppose I snap a digital photo. Because I’m the one who snapped the photo, US law agrees that I own the copyright to it. 


WebAssembly: The Future of Cloud Native Distributed Computing

In its own right, WebAssembly brings new capabilities and additional security features to modern development environments — both in the browser and with cloud native. However, modern cloud native developers are confronted with new challenges, such as CPU diversity, multiple operating environments, security, distributed application architecture, and scalability, that transcend deployments into a single public cloud provider. To understand the modern distributed computing environment, one must consider the rising diversity inside the public cloud, where we see new ARM CPUs challenging the historical dominance of the x86 chipsets, competing on both cost and performance. Traditional enterprise systems typically compile software for a specific target environment, including a CPU and an operating system, such as 32-bit Linux, ARM64 macOS, or 64-bit Windows. Looking past the public cloud towards the edge, we find an even more diverse range of execution environments on an assorted set of CPU architectures.


Post-Quantum: Bi-Symmetric Hybrid Encryption System

A significant difference from commonly employed asymmetric encryption is that during the initial handshake to set up communication, no vulnerable data are exchanged. Should the sender's key communication be intercepted by a hacker, they still cannot pretend to be the originator of the communication to the receiver. The encryption itself is achieved by randomly generating keys and interweaving them with portions of unencrypted data to be transmitted, applied to single bytes of data rather than long byte collections. During the initial handshake, private keys are generated from or found in the form of login credentials, credit card information, biometric data, or other personal credential information or pre-shared private keys. The private keys are used to start the handshake and are never actually transmitted. Randomly generated data in the form of challenge codes, counter challenge codes and session keys are exchanged during the handshake. This allows the client and server to ascertain that the communicator at the other end is who they say they are.
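The Bi-Symmetric scheme itself is proprietary, but the general pattern it relies on — proving knowledge of a never-transmitted key by answering random challenges — can be sketched with a generic HMAC challenge-response (a simplification for illustration, not the patented protocol; all names are made up):

```python
import hmac
import hashlib
import secrets

# Both sides hold a pre-shared key (e.g. derived from credentials) that is
# never transmitted; only random challenges and HMAC responses cross the wire.
shared_key = b"pre-shared-credential-derived-key"

def respond(key: bytes, challenge: bytes) -> bytes:
    """Prove knowledge of the key without revealing it."""
    return hmac.new(key, challenge, hashlib.sha256).digest()

# Server -> client: random challenge code
server_challenge = secrets.token_bytes(32)
client_response = respond(shared_key, server_challenge)

# Server verifies the client knew the key (constant-time comparison)
assert hmac.compare_digest(client_response, respond(shared_key, server_challenge))

# Client -> server: counter-challenge, verified the same way
client_challenge = secrets.token_bytes(32)
server_response = respond(shared_key, client_challenge)
assert hmac.compare_digest(server_response, respond(shared_key, client_challenge))
```

An eavesdropper sees only random challenges and their keyed hashes, neither of which lets them impersonate either party in a later session.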



Quote for the day:

"Leaders who won't own failures become failures." -- Orrin Woodward

Daily Tech Digest - October 04, 2021

4 Misconceptions about DevSecOps Every CIO Should be wary of

True DevSecOps, like DevOps, necessitates a harmonious collaboration of people, processes, and tools. It’s a culture, automation, and platform design approach that emphasizes security as a shared responsibility across the IT lifecycle. DevSecOps is, in fact, a human as well as a technical challenge. Personal development, culture, and connections with teams and managers are all critical factors in forming a successful DevSecOps team.  ... Cloud and cloud-native software and infrastructure are an ideal fit for DevSecOps. It is, nonetheless, useful for a wide range of environments, particularly those that continue to apply a ten-year-old security playbook to their risk profile. Containerized cloud-native environments aren’t the only place where DevSecOps can be used. Some of the technological and process features of DevSecOps, as well as the general shift toward rapid, iterative development cycles, work well with microservices architecture, but not as well with big monoliths’ many dependencies and extensive test cycles. However, most organizations may benefit from DevSecOps’ cultural features, particularly those that have traditionally considered security as a pre-deployment checkbox rather than a priority ingrained throughout the organization.


Are You Too Late to Start Your Data Science Journey?

What concerned me the most about being too late was not the amount of material I needed to learn. Rather, I doubted whether I would be able to find a job by the time I had learned enough. Data science was a pretty hot topic and there were quite a number of people already working in this field. In the last three years, I have been not only learning data science but also observing the dynamics of this field. My thoughts about being too late changed. I was not too late to start back then. Moreover, if I started learning data science today, I would not be too late either. ... The biggest challenge for those who want to make a career change into data science is finding the first job. I faced the same challenge and it took me about two years to land my first job. This issue is not related to whether you are too late to start learning data science. The jobs are out there and increasing. However, without prior job experience, it is difficult to demonstrate your skills and convince employers or recruiters.


3 fading and 3 future IT culture trends

Whether your IT team is remote, hybrid, or back in the office, all the pivots of 2020 made it clear just how crucial digital transformation is for business. But more than that, it’s important to have the right tech stack – one that’s simple, efficient, and centralized, not scattered or complicated. Adobe Workfront’s State of Work 2021 report indicates that 32 percent of employees have left a job due to inadequate technology that was a barrier to their workflow, and another 49 percent are likely to quit if the tech stack is frustrating or hard to use. IT leaders must scale down their technology in order to consolidate tools and software programs for maximum efficacy. ... While we’re on the subject of a centralized tech stack, let’s talk about the newer trend that has made an imprint on IT culture: the cloud-based workspace. Part of a tech solution called Infrastructure as a Service (IaaS), this digital hub is hosted in the cloud but accessible wherever there’s an internet connection. A cloud-based workspace also eliminates the need for complex hardware or equipment since workers can access it from a wireless device. 


Looking into the future of the metaverse

What will make or break the metaverse will be its ability to capture data from its surroundings and even the biosphere. The only way to do that will be by mass ingestion of the data coming from the Internet of Things. Only with this data will you be able to create a rich and meaningful environment. The next need after “seeing” will be “interacting,” meaning that the data not only needs to be represented in a meaningful way but also must be responsive. On the lowest level, equal to the physiological needs of humans in the real world, you can imagine the needs of a digital infrastructure in the metaverse: tools for ingestion of and access to data and the infrastructure to store, analyse and enrich data. But just like in the real world, before any meaningful interactions can be achieved, security needs to be guaranteed. With all the attention on the exciting possibilities of the metaverse, you could forget what infrastructures will be needed for the heavy lifting. It would have to be optimised for transferring and storing data. To make the metaverse attractive, not only would historical data need to be available, to facilitate context and depth in any interaction, but it would also have to be highly accurate.


5 Practical Steps To Protect Your Business From BYOD Security Risks

In general, personal mobile devices should not be considered the employee’s primary device – they should only be considered a convenience to access chat, email and other cloud apps when using a more secure device is not an option. Note that a VPN is needed when in a public place and an unsecure Wi-Fi network is the only option. Again, it is recommended the employee use their company-provided and managed laptop, not a personal mobile device. Many usage policies actually prohibit employees from connecting to unsecured Wi-Fi in the first place, which solves the problem. ... Another important step to protecting your business against BYOD risks is to create a list of accepted devices for accessing company data. Without a thorough list of the number of BYOD devices in use within an organization’s ecosystem, it’s extremely difficult to effectively measure and mitigate the risk that this poses. Knowing the number of personal devices being used for business tasks allows you to require specific security measures for each type of device. 


How Can Leaders Prepare for the Unexpected?

With the impacts of an inflection point clear, how do organizations operate in a timely fashion to plan and then respond? Francis said, “I tried to use the past to predict the future. It didn’t work. Given this, I gather all the critical players together routinely. At the same time, I let the pros do their job and I focus on clearing the way of obstructions.” To be able to do this, Young said it is “important to hire good people, empower them, give them the resources they need to operate at the best of their ability, and let them do their jobs. The basics of practicing disaster recovery/business continuity should be built into the organization’s DNA.” CIO Martin Davis claimed, “it is important to think through common scenarios, work out how you would handle them, and ensure you have game plans on the shelf that can be adapted for the unexpected. Ensure you learn from previous events and have practical advice ready to use and people with the right training.” To do this, Gildersleeve said organizations need clear definitions of who is responsible for what areas in advance of the unexpected.


Learn the Blockchain Basics - Part 9: Blockchain Around the World

From the perspective of a technician, the blockchain is: A transactional platform and distributed accounting ledger using cryptocurrency tokens as a representation of a specific value at the current time (same as fiat). That means that a transaction is carried out by the blockchain nodes, and every member of this blockchain party has a copy of this transaction on their computer (node). Everybody verifies if the entities that are about to do a transaction have enough funds to make this transaction happen. You are basically announcing to all members of this system that you are about to make something happen and, even though this action is happening between two peers, the rest of the network verifies and records the transaction. It is a computing infrastructure that uses the power of the decentralized database with linear cell-space structure, published in a semi-public way (also known as “the block”). It’s an open-source software operating on a development platform of the future. The trust service layer, in combination with Peer to Peer (P2P) network, handles microtransactions and large-value transactions as well - allowing two users to do the same things that a bank would need to do on their behalf.
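The verification step described above — every node checking funds before a transaction is recorded on its own copy of the ledger — can be sketched in a few lines (an illustrative toy, not any particular chain's consensus protocol; names and balances are made up):

```python
# Every node holds its own copy of the same ledger and independently
# verifies that the sender has sufficient funds before recording a transfer.
ledger = {"alice": 50, "bob": 10}
nodes = [dict(ledger) for _ in range(3)]  # three nodes, each with a copy

def apply_tx(node_ledger: dict, sender: str, receiver: str, amount: int) -> bool:
    """A single node's verify-then-record step."""
    if node_ledger.get(sender, 0) < amount:
        return False  # node rejects: insufficient funds
    node_ledger[sender] -= amount
    node_ledger[receiver] = node_ledger.get(receiver, 0) + amount
    return True

# The transfer only "happens" once every node has verified and recorded it
results = [apply_tx(n, "alice", "bob", 20) for n in nodes]
assert all(results)
assert all(n == {"alice": 30, "bob": 30} for n in nodes)
```

A real network also handles disagreement between nodes via a consensus protocol; this sketch only shows the per-node verify-and-record idea.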


Donald Knuth on Machine Learning and the Meaning of Life

“The word open source didn’t exist at that time,” Knuth remembers, “but I didn’t want proprietary rights over it, because I saw how proprietary rights were holding things back.” Knuth remembered how IBM had allowed other companies to make their own compilers for IBM’s Fortran programming language — whereas things were different in the typography industry. “Each manufacturer had their own language for composing pages, and that was holding everything back…” But in addition, due to the success of his programming books, “I didn’t need the income! I already had a good job, and people were buying enough books that it would bring me plenty of supplemental income for everything my kids needed for education, whatever,” he said. Referring to a familiar structure in Boolean logic, Knuth quips that income “is sort of a threshold function” — that is, it basically just needs to determine whether a certain minimum has been exceeded. “And so I could specifically see the advantage of making it open for everybody…”


6 data center trends to watch

The struggle to attract and retain staff is an ongoing problem for many data-center owners and operators. Among respondents, 47% report difficulty finding qualified candidates for open jobs, and 32% say their employees are being hired away, often by competitors. In the big picture, Uptime projects that staff requirements will grow globally from about 2 million full-time employee equivalents in 2019 to nearly 2.3 million in 2025. According to Uptime: “New staff will be needed in all job roles and across all geographic regions. In the mature data-center markets of North America and Europe, there is an additional threat of an aging workforce, with many experienced professionals set to retire around the same time—leaving more unfilled jobs, as well as a shortfall of experience. An industry-wide drive to attract more staff, with more diversity, has yet to bring widespread change.” The notion of sustainability is growing in importance in the data-center sector, but most organizations don’t closely track their environmental footprint, Uptime finds. Survey respondents were asked which IT or data-center metrics they compile and report for corporate sustainability purposes. 


Combating vulnerability fatigue with automated security validation

Legacy vulnerability management tools flood security teams with long lists of community-prioritized vulnerabilities – more than 15,000 vulnerabilities were found in 2020 alone. Of these, only 8% were exploited by attackers. Not to mention the top 30 recently reported by CISA. Currently, it’s a cat-and-mouse game that the customer can never win – chasing an ever-growing list of vulnerabilities without knowing whether they fixed the ones that attackers want to target, exposed the most risk-bearing vulnerabilities, checked whether there is an active exploit for a specific vulnerability, or analyzed the possible risk and impact that may originate from a vulnerability. All that context is required for security and IT teams to reduce risk, maintain business continuity, and stay a step ahead of the adversary. Unfortunately, the chase for more and more vulnerabilities has kept us away from the goal of where we want and need to be. At this stage of the battle with cyber adversaries, CISOs can’t go backward into the world of vulnerability fatigue.
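The prioritization the article argues for — narrowing a long list down to the small exploited fraction — amounts to a simple filter-and-rank step. A minimal sketch with made-up records (the CVE IDs and scores are illustrative, not real entries):

```python
# Hypothetical vulnerability records: the point is to triage by what is
# actively exploited, not to chase the full list in CVSS order.
vulns = [
    {"cve": "CVE-2020-0001", "cvss": 9.8, "exploited_in_wild": True},
    {"cve": "CVE-2020-0002", "cvss": 7.5, "exploited_in_wild": False},
    {"cve": "CVE-2020-0003", "cvss": 6.1, "exploited_in_wild": False},
    {"cve": "CVE-2020-0004", "cvss": 8.8, "exploited_in_wild": True},
]

# Keep only vulnerabilities with a known active exploit, worst first
actionable = sorted(
    (v for v in vulns if v["exploited_in_wild"]),
    key=lambda v: v["cvss"],
    reverse=True,
)
print([v["cve"] for v in actionable])  # ['CVE-2020-0001', 'CVE-2020-0004']
```

In practice the `exploited_in_wild` signal would come from a threat-intelligence feed such as CISA's Known Exploited Vulnerabilities catalog rather than a hardcoded flag.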



Quote for the day:

"Great Groups need to know that the person at the top will fight like a tiger for them." -- Warren G. Bennis

Daily Tech Digest - October 03, 2021

What Is A Blockchain Wallet & How Does It Work?

A blockchain wallet is digital software that runs on a blockchain and stores private and public keys, as well as monitoring and keeping all the transactions related to those keys on a blockchain. Ideally, a blockchain wallet does not store crypto; rather, all the records relating to these keys are stored on the blockchain on which the wallet is hosted. What this means is that the wallet provides an ID to enable the tracking of all transactions associated with that ID. The blockchain ID is the blockchain wallet address, which is associated with the public key and the private key. Practically speaking, blockchain wallets allow users to store, send, receive, and manage their digital assets on the blockchain.  ... Modern crypto wallets come with integrated APIs to pull data from other platforms. Some can pull data for charting and crypto market analysis to enable a user to make profitable trading decisions; social features allow emailing and chatting with other users online, posting status updates, and following and copying their trading practices; and transaction tracking includes reading history and prices for various cryptos.


Bipartisan US Senate Bill Eyes Cryptomining Oversight

As part of the bill, the Treasury Department would quantify the amount of cryptocurrency mined in the U.S. - and in nations such as China - since 2016. "In order to strengthen U.S. competitiveness, our government must get a better handle on the role that cryptocurrency is playing in the global economy and how it is being leveraged by other countries," Hassan said. Michael Fasanello, who has served in various roles within the U.S. Justice and Treasury departments, including for Treasury's Financial Crimes Enforcement Network, or FinCEN, tells Information Security Media Group that the move "is liable to tax department resources at a time when they ought to be focusing on collaborating with Congress and private industry on appropriately scoped compliance regulation to protect the crypto ecosystem from illicit actors, while … encouraging innovation." Conversely, Neil Jones, a cybersecurity evangelist for the firm Egnyte, tells ISMG, "[This] bipartisan legislation is a breath of fresh air for the cybersecurity industry. ..."


Strategic Planning in the Agile Era

In a world of constant flux, leaders must create strategies that are able to flex and adapt as necessary. As Crawford said, everything starts by leaders creating “a strategy that can evolve over time. Today’s business strategies should anticipate forward-looking possibilities. Execution of changes needs to be nimbler and more responsive.” This means organizations need “to have resource time dedicated to constantly scanning market and innovation opportunities. You can't respond or get out in front without ensuring people have this as part of their role,” said Young. Organizations clearly need to put in place agile systems and processes that allow them to not only adapt more quickly but also take stock of the big picture so they can make more informed and strategic decisions. CIO David Seidl’s organization has effectively created a seed fund “with a $50 million investment to do new things, take risks and focus on innovation and creativity. Now we're trying stuff out, learning lessons, and doing an ever-increasing volume of cool stuff.” While the need to take a step back and view the broader picture is clear, unfortunately, Young said, “too many senior people get sucked into operations or day-to-day project activities.”


Virtual Panel: DevSecOps and Shifting Security Left

There certainly is ambiguity and confusion around who exactly is responsible for securing software and the development process - in fact, we recently found in a report that just over half (58%) of security professionals believe it is their responsibility, while a similar number (53%) of developers believe software security falls under their purview. It’s this lack of consensus that is at the crux of today’s biggest cybersecurity challenge: security is not being baked into software during the development process, which has led to destructive cyber repercussions, as we’ve seen recently with the Kaseya, SolarWinds, and Microsoft attacks. It’s just not possible for one team to keep the software build process secure - we need to incentivize developers to work with security teams from the start of development. To be clear: Developers must become responsible and accountable for the security of the software they build and operate. Developers are often prioritizing speed and innovation, and security teams are left to pick up the pieces after software is built to keep it safe from hackers. 


Digital transformation: Thinking beyond the core of your business can help you grow

Many tech leaders have recounted tales of woe of companies that missed transformational shifts in their markets, and perhaps you've referenced Kodak or Blockbuster Video at some point in your career. With the benefit of hindsight, it's all too easy to assume leaders at these companies had grown fat and lazy and willfully ignored the obvious shifts happening before their eyes. However, rather than suffering from a unique and rare collective incompetence, these leaders diligently and dutifully focused on their core business. They probably assumed that transformation was "above their pay grade" or merely a question of applying some novel and interesting technology to today's business with the assumption that they were taking care of areas outside the core. Separating the capabilities and innovative nature of technology from its application ensures that you regularly devote some of your attention, initiatives, and budget to exploring areas outside your organization's core business. You might even be able to leverage seemingly "legacy" technologies that your organization already possesses to areas outside the core and accelerate your company's ability to identify and create truly transformational opportunities with today's tech and skillsets.


Blockchain: How it plays a crucial role in assessment of credit risk in borrowers?

The innovation of blockchain as technology plays an integral role in alleviating the challenges of the traditional lending process, mainly in the verification of identities. In contrast to traditional systems, blockchain is based on distributed ledger technology that decentralises and secures the customers’ data. Simply put, it works by keeping the customer data in a distributed ledger instead of centralised storage, which also reduces cyber-crime risks. Given the blockchain infrastructure, the profiling of customers becomes accurate, secure and private. Furthermore, all network participants get access to information and records of transactions without affecting the customers’ privacy. The technology of distributed ledger eliminates duplication of record maintenance, resulting in the reduction of cost and time involved in the process. Moreover, blockchain is based on immutability, which means no participant can tamper with a transaction recorded in the distributed ledger. However, if an error occurs while maintaining the record, a reversing entry must be added, and the original error stays visible.
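The immutability-with-reversal property described above can be sketched as a hash-chained, append-only ledger (an illustrative toy with made-up entries, not a production design): nothing is edited in place, a mistake is corrected by appending a reversing entry, and both remain visible.

```python
import hashlib
import json

ledger = []  # append-only: entries are never modified or removed

def append_entry(entry: dict) -> None:
    """Chain each entry to the previous one via its hash."""
    prev_hash = ledger[-1]["hash"] if ledger else "0" * 64
    body = json.dumps(entry, sort_keys=True) + prev_hash
    ledger.append({**entry, "prev": prev_hash,
                   "hash": hashlib.sha256(body.encode()).hexdigest()})

append_entry({"loan": "L-1", "amount": 1000})   # erroneous amount recorded
append_entry({"loan": "L-1", "amount": -1000})  # reversing entry, error stays visible
append_entry({"loan": "L-1", "amount": 1200})   # corrected entry

# The net position is correct, and the full history is preserved
balance = sum(e["amount"] for e in ledger)
print(balance)  # 1200
```

Because each entry embeds the hash of its predecessor, silently rewriting an old record would break every later hash link, which is what makes in-place tampering detectable.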


6G technology and taxonomy of attacks on blockchain technology

Most researchers focus only on the blockchain's characteristics or its architecture and propose solutions to overcome some threats or recorded attacks. Instead, the proposed solutions target and enhance the protocols employed by the blockchain system. Perhaps that is because the blockchain system is not exactly simple, and it is hard to grasp or untangle its complex architecture. However, without considering redesigning the blockchain system to further alter its characteristics, potential blockchain applications would remain susceptible to different security attacks. ... Most cryptocurrency applications share a remarkably similar ecosystem, with differences in the consensus protocols. This section focuses on the Bitcoin ecosystem, which is considered one of the origins of decentralized digital currency, if not the first, and the one most associated with the blockchain. Moreover, understanding the Bitcoin ecosystem would, without doubt, set a firm basis for understanding any other existing blockchain-based ecosystem, let alone innovating new blockchain-based applications.


Analysis of Cyber Security In E-Governance Utilizing Blockchain Performance

The blockchain architecture comprises a sequence of blocks that encapsulates a complete list of transactions, for example, a public ledger. Each block consists of a block header and a block body, as represented in the figure below, together with digital signatures. The block header contains six fields: parent block hash, nonce, nBits, timestamp, Merkle tree root hash, and block version. Each block header references exactly one parent block; the initial block in the chain is known as the genesis block, which has no parent. The block version represents the rules of block validation to be followed. The Merkle tree root hash represents the hash value of all the transactions in the block. The timestamp records the current universal time in seconds. The nBits field indicates the target threshold for a valid block's hash value. The nonce is a 4-byte field that starts from zero and increases with each calculation of the hash.
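The interplay of those header fields can be sketched with a toy miner (illustrative only: the target check is a simplified stand-in for the real nBits compact encoding, and the parent hash and Merkle root are dummy values):

```python
import hashlib
import time

def mine(version: int, parent_hash: str, merkle_root: str,
         target_prefix: str = "000"):
    """Increment the nonce from zero until the double-SHA256 of the
    header fields meets the (simplified) target threshold."""
    timestamp = int(time.time())
    nonce = 0
    while True:
        header = f"{version}{parent_hash}{merkle_root}{timestamp}{nonce}"
        digest = hashlib.sha256(
            hashlib.sha256(header.encode()).digest()).hexdigest()
        if digest.startswith(target_prefix):  # stand-in for the nBits check
            return nonce, digest
        nonce += 1

# Dummy parent hash and Merkle root, just to exercise the loop
nonce, block_hash = mine(version=1, parent_hash="00" * 32,
                         merkle_root="ab" * 32)
print(nonce, block_hash[:12])
```

Lowering the target (requiring more leading zeros) raises the expected number of nonce attempts, which is how real networks tune mining difficulty.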


Responsible Tech Series 2021 Part 1: exploring ethics within digital practices

AI regulations are starting to take shape, notably in the EU, but with such measures not set to be fully enforced for another few years (implementation of the AI Act in the EU isn’t expected until 2024), Kewley believes that companies aren’t thinking about compliance enough. “Companies think that they’re over the hill when it comes to privacy,” he said. “But compliance isn’t being thought about yet, and it’s now a very real concern to be considered.” Regarding what more can be done to ensure that regulations are suitable globally, Duke suggested keeping track of how products embedded with AI systems, distributed around the world from countries such as China and the US, are designed. “We need a global framework for AI,” she commented. “Work is being done by the US and the World Economic Forum, but this isn’t globally standardised. This needs to be proactive.” On the flipside, a recent survey conducted by Clifford Chance and YouGov, which had participation across Europe and the US, found that 66% of respondents are feeling positive about AI, and Kewley believes that positive discussions about the technology are a step in the right direction.


5 ways leaders can boost psychological safety on teams

Psychological safety starts with the experience of belonging – one of the most basic needs of every human being. However, it is difficult for people to feel that they are part of a shared story if they lack visibility to the most important discussions and decision-making processes in their organization. To address this, I’ve found two things to be especially effective: Sharing openly as much as you can as early as possible, even when you feel you don’t have time; and Co-creating systems that increase transparency in the whole organization. Both take a lot of time, but it always pays off. I schedule weekly updates with my team and also actively use, and invite others to use, systems we have built for improving the flow of information. ... Belonging means not only knowing what’s going on but also feeling close to others. While technology can help with this, it’s not enough. Creating intimacy during these unprecedented hybrid times can be challenging, but small things can go a long way. For example, at Futurice we make a point of sharing our hobbies and interests when we meet new people. 



Quote for the day:

"Great leaders go forward without stopping, remain firm without tiring and remain enthusiastic while growing." -- Reed Markham

Daily Tech Digest - October 02, 2021

Microservices are Dead — Long Live Miniservices

We tend to think about “microservices” as small, very logic-focused services that deal with, usually, one responsibility. However, if we look at Martin Fowler’s definition of Microservices — and you know, he’s a very smart guy, so we usually like to know what he thinks — you’ll notice we’re missing a very small, yet key, trait about microservices: decoupled. Let’s take a closer look at what we’re calling “microservice”. This term gets thrown around so much these days that it’s getting to the point where it’s exactly like teenage sex: everyone talks about it, nobody really knows how to do it, everyone thinks everyone else is doing it, so everyone claims they are doing it. Truth be told, in 99% of the interviews I conduct as a manager, when I ask about microservices I get responses about REST APIs. And no, they’re not necessarily the same thing. And by definition, REST APIs alone can’t be microservices, even if you split them up into multiple smaller ones, each taking care of a single responsibility. They can’t, because by definition for you to be able to use a REST API directly, you need to know about it.
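The coupling distinction being drawn here can be sketched in miniature: a direct call forces the caller to know the callee's API, whereas publishing an event does not. This is an illustrative toy with made-up names, using an in-process queue as a stand-in for a real message broker:

```python
from queue import Queue

# Coupled: the caller must know about and invoke this exact endpoint.
def orders_api_create(order: dict) -> dict:   # stand-in for a REST endpoint
    return {"status": "created", **order}

result = orders_api_create({"id": 1})         # caller depends on this API

# Decoupled: the producer only knows the event bus, not its consumers.
bus = Queue()
bus.put({"event": "order_created", "id": 2})  # producer side

def billing_consumer(queue: Queue) -> str:    # one of any number of consumers
    event = queue.get()
    return f"billing handled {event['event']} #{event['id']}"

print(billing_consumer(bus))  # billing handled order_created #2
```

In the second style, new consumers (shipping, analytics, ...) can subscribe without the producer changing at all, which is the decoupling Fowler's definition asks for.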


Google’s State of DevOps 2021 Report: What SREs Need to Know

SREs may not think of cloud strategy as a core part of their job responsibility. That’s a task that more commonly falls to cloud architects. But simply encouraging their organizations to leverage more reliable cloud architectures can be one way to improve reliability, according to the State of DevOps report. While enhanced reliability is not the only reason why more and more organizations are now expanding into multi-cloud and hybrid cloud architectures, increased availability was the second most common reason for adopting one of these strategies among the professionals whom Google surveyed. The report also noted that organizations with multi-cloud or hybrid cloud architectures were 1.6 times more likely to meet or exceed their performance goals. The takeaway here for SREs is that, although having more clouds to manage creates new reliability challenges in some respects, the data clearly shows that multi-cloud and hybrid cloud lead to better reliability outcomes in the long run. It’s time to let go of your single cloud.


Change management and adaptation for Enterprise Architecture Practitioner

Ask questions like, “Are these applications still relevant?”, “Is this system working?”, or “How can I make this system better?” Then assess how you can make a difference to add value and propel your organization to become an industry leader. The complex environment, fueled by continued advances in technology, hinders the ability of the organization to realize value. The enterprise architecture solution will likely not deliver immediate returns (Gong & Janssen, 2021). Kotusev (2018) noted that a rigid approach to enterprise architecture implementation is the worst strategy. Persistent evaluation of the EA solution is necessary to signal the need for adaptation. It is appropriate to have parts of the EA strategy remain purposively generalized (Alwadain, 2020; Marcinkowski & Gawin, 2019). For example, a flexible EA solution can quickly transition to SaaS (software as a service) that delivers more value than on-premises operations. Cooiman (2021) recommended considering operations that directly support and influence portfolios, programs, projects, and business functions, such as supply chain management and payroll.


The Togaf® Standard Cited As GovTech Solution By The World Bank Group

As the report notes, previous surveys have not captured the full scope of work happening in GovTech in a reliable way. The Open Group has, as its mission, a long-standing focus on the open flow of information – Boundaryless Information Flow™. Transparent information-sharing makes connected systems worth more than the sum of their parts and makes innovation easier to spread. Likewise, the GTMI’s clear view of where progress is being made in government digitalization is something which will, I think, help to accelerate the modernization of public sector services globally. Indeed, much of the report’s key insights are concerned with ensuring that GovTech infrastructure is interconnected and interoperable. Often, it finds, countries have discrete digitalized workflows such as a back-office solution or an online service portal, but are yet to knit these workflows together. Likewise, while digital workflows open the door to two-way information flow with citizens, making services more efficient and responsive, this has seen only limited global rollout.


Working with Metadata Management Frameworks

Get an MMF Baseline: Even if no formal MMF exists in an organization, an implicit one does. Technical documents mapping data architecture, the knowledgeable business analyst to whom others turn to understand reporting data, and data-entry procedures provide context around an organization’s data and pieces of its MMF. Getting a baseline of what people, processes, and technology already exist and how they inform the organization’s Metadata Management framework just makes sense. Using a “qualified and knowledgeable data professional (and other skilled talents) to administer and interpret data readiness assessments” along with Data Maturity models like those put forth by Gartner, or the Capability Maturity Model Integration (CMMI), gives a good MMF starting place. Be Clear About What an MMF Will Achieve: Be clear why an organization needs to manage metadata and implement a Metadata Management framework. Metadata Management helps reduce training costs, provides better data usage across data systems, and simplifies communication.


European Blockchain Services Infrastructure (EBSI): the European way to get the most out of blockchain

EBSI is designed with a number of core principles in mind: working towards the public good; transparent governance; data compatibility; open-source software; and compliance with relevant EU regulations such as the GDPR and eIDAS. EBSI would provide a common, shared, and open public infrastructure aimed at supporting a secure and interoperable ecosystem that will enable the development, launch, and operation of EU-wide cross-border digital services in the public sector. The infrastructure will reflect European values, with data sovereignty and green credentials in mind, and tackle global issues such as climate change and supply chain corruption. EBSI would thereby deliver public services with high requirements for scalability and throughput, interoperability, robustness, and continuity of service, and with the highest standards of security and privacy, allowing public administrations and their ecosystems to verify information and make services trustworthy. This infrastructure should be deployed within a period of three years.


Focus on three areas for a holistic data governance approach for self-service analytics

The right tooling will help you put your governance framework into practice, providing the necessary guardrails and data visibility that your teams need to boost trust and confidence in their data analysis. Perhaps the most fundamental tool for data governance—certainly the greatest help for us here at Tableau—is our integrated data catalog. This enables employees to see data details like definitions and formulas, lineage and ownership information, as well as important data quality notifications, from certification status to events, such as when a data source refresh fails and the information isn’t up to date. A data catalog boosts the visibility of valuable metadata right in people’s workstreams, whether that metadata lives in Tableau or is brought in from an external metadata management system via an API. This also helps IT with impact analysis and change management, to understand which users and which assets are affected downstream when changes are made to a table.
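To illustrate the downstream impact analysis such a catalog enables, the sketch below walks lineage metadata (asset-to-downstream-asset edges) to list everything affected by a change to a table. The asset names and the `LINEAGE` mapping are hypothetical; a real deployment would pull these edges from the catalog’s API rather than hard-coding them.

```python
from collections import deque

# Hypothetical lineage metadata: asset -> assets that consume it downstream.
LINEAGE = {
    "sales_table": ["revenue_datasource", "churn_model_input"],
    "revenue_datasource": ["exec_dashboard", "forecast_workbook"],
    "churn_model_input": ["churn_dashboard"],
}

def downstream_assets(changed_asset: str) -> set[str]:
    """Breadth-first walk over lineage edges to find every affected asset."""
    affected, queue = set(), deque([changed_asset])
    while queue:
        for child in LINEAGE.get(queue.popleft(), []):
            if child not in affected:
                affected.add(child)
                queue.append(child)
    return affected

print(sorted(downstream_assets("sales_table")))
# ['churn_dashboard', 'churn_model_input', 'exec_dashboard', 'forecast_workbook']
```

The same traversal, run in reverse over the edges, answers the upstream question: where did this dashboard’s numbers come from?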


Private distributed ledger technology or public blockchain?

A centralized DLT is not immutable. The ledger can be rewritten arbitrarily by whoever controls it, or as the result of a cyberattack. Because of its open and competitive nature (mining, staking, etc.), a public blockchain can achieve immutability, and hence its records will be credible. Thousands of independent nodes can ensure an unprecedented level of resistance to any sort of attack. The question of reversibility usually comes next, after the discussion about immutability. How do you correct a mistake? What if you need to change your smart contract? What if you lost your private key? There is nothing you can do retroactively — alteration of the blockchain is impossible. What’s done is done. In this regard, DLT is usually presented as the opposite: an alternative to blockchain. You will hear that DLTs can be designed so that those who control the network verify transactions on entry, and therefore non-compliant transactions are not allowed to pass through. But it would be a fallacy to think that censorship in the network will ultimately exclude all mistakes and unwanted transactions. There will always be a chance for a mistake.
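The immutability argument rests on hash chaining: each record commits to the hash of the record before it, so a retroactive edit invalidates every subsequent hash. A minimal sketch, with made-up transaction payloads (a real chain adds distributed consensus on top of this structure):

```python
import hashlib

def block_hash(prev_hash: str, payload: str) -> str:
    """Hash of a block = SHA-256 over the previous hash plus this payload."""
    return hashlib.sha256((prev_hash + payload).encode()).hexdigest()

def build_chain(records):
    """Link each record to its predecessor via its hash."""
    chain, prev = [], "0" * 64  # all-zero genesis hash
    for payload in records:
        h = block_hash(prev, payload)
        chain.append({"payload": payload, "prev": prev, "hash": h})
        prev = h
    return chain

def verify(chain) -> bool:
    """Recompute every hash; any edited payload breaks the links after it."""
    prev = "0" * 64
    for blk in chain:
        if blk["prev"] != prev or block_hash(prev, blk["payload"]) != blk["hash"]:
            return False
        prev = blk["hash"]
    return True

chain = build_chain(["alice->bob:10", "bob->carol:5"])
assert verify(chain)
chain[0]["payload"] = "alice->bob:1000"  # a retroactive edit...
assert not verify(chain)                 # ...is immediately detectable
```

A centralized operator can of course recompute all the hashes after an edit; it is the thousands of independently held copies that make such a rewrite infeasible on a public chain.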


Can blockchain technology fill the trust gap for your business?

The extensive documentation, verified by third-party brokers, that has underpinned trading and commercial agreements in the past is at odds with digital ways of working. The same steps of these processes need to be maintained, but conducted through digital interfaces that are more open and more complex. Distributed Ledger Technologies (DLT) can fill this gap. A distributed ledger takes the approach of creating equal, decentralized copies of transactions, instead of storing them in one central place (i.e. a database for digital, or a document for analogue). What makes DLT so exciting and relevant is that it was conceived and developed for this decentralized digital world where trust is at a premium. Instead of being built on existing relationships, trust can be anchored in cryptographic processes (the so-called consensus algorithms), which control the transactions. It’s not simply safe storage of the information that creates trust, it’s also how it’s collected. DLT can determine the conditions under which nodes of the decentralized infrastructure capture and record new transactions, and when they do not.
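That last point — nodes recording transactions only under agreed conditions — can be sketched as an admission rule applied before a transaction is replicated to each node’s ledger copy. The allow-list, field names, and `submit` helper below are all hypothetical, purely for illustration:

```python
APPROVED = {"alice", "bob", "carol"}  # illustrative participant allow-list

def validates(tx: dict) -> bool:
    """Admission rule every node applies before a transaction is recorded."""
    return (tx.get("sender") in APPROVED
            and tx.get("receiver") in APPROVED
            and tx.get("amount", 0) > 0)

def submit(tx: dict, replicas: list[list]) -> bool:
    """Append tx to every replica only if the network's rule admits it;
    non-compliant transactions never enter any ledger copy."""
    if not validates(tx):
        return False
    for ledger in replicas:
        ledger.append(tx)
    return True

replicas = [[], [], []]  # three independent node copies of the ledger
assert submit({"sender": "alice", "receiver": "bob", "amount": 10}, replicas)
assert not submit({"sender": "mallory", "receiver": "bob", "amount": 10}, replicas)
assert all(len(ledger) == 1 for ledger in replicas)
```

In a real network each node runs the check independently and a consensus algorithm reconciles their verdicts; the sketch collapses that to a single deterministic rule.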


Achieving New Levels of Resilience Through Use of Cloud-Based Software and Agile Ways of Working

In general, agile teams work with robust methods and practices across different groups and their ecosystem. A tools-driven approach and automated engineering make it possible to build a continuous, connected ecosystem in which captured feedback and user behavior are analyzed and acted upon. Automated engineering helps deliver a better customer experience. Digital-first does not work in silos; it builds products and platforms to connect and create an ecosystem. Traditionally, we dealt with effort, counts, rollbacks, monthly releases, and so on; under the guise of agile, KPIs were chosen to suit management communication patterns and reporting. Modern-day engineering focuses on the outcome. Failure is noticed and fixed rapidly; how quickly, and how much things improve as a result, are the real questions. In this ecosystem, the end customer sees the change immediately. The measurement of success of the ecosystem rests on several performance indicators on the development side, such as MTTX, lead time/cycle time, and deployment rates.
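Indicators like lead time and deployment rate fall straight out of deployment records. A minimal sketch with made-up timestamps (a real pipeline would pull these from CI/CD tooling):

```python
from datetime import datetime

# Illustrative records: (commit_time, deploy_time) pairs; the data is made up.
deployments = [
    (datetime(2021, 10, 1, 9), datetime(2021, 10, 1, 15)),
    (datetime(2021, 10, 2, 10), datetime(2021, 10, 2, 12)),
    (datetime(2021, 10, 4, 8), datetime(2021, 10, 4, 20)),
]

# Lead time: elapsed time from commit to deploy, averaged in hours.
lead_times = [deploy - commit for commit, deploy in deployments]
avg_lead_hours = sum(lt.total_seconds() for lt in lead_times) / len(lead_times) / 3600

# Deployment rate: deployments per day over the observed window.
span_days = (deployments[-1][1] - deployments[0][1]).days or 1
deploys_per_day = len(deployments) / span_days

print(f"average lead time: {avg_lead_hours:.1f} h, rate: {deploys_per_day:.2f}/day")
```

MTTX-style metrics (mean time to detect, to recover, and so on) follow the same pattern, averaging the gap between an incident’s start and resolution timestamps.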



Quote for the day:

"We get our power from the people we lead, not from our stars and our bars." -- J. Stanford