Daily Tech Digest - October 28, 2021

Using Complex Networks to improve Machine Learning methods

Let’s start by defining what a complex network is: a collection of entities called nodes, connected to one another by edges that represent some kind of relationship. If you’re thinking “this is a graph!”, you are correct: most complex networks can be treated as graphs. However, complex networks usually scale up to thousands or millions of nodes and edges, which can make them hard to analyze with standard graph algorithms. There is a lot of synergy between complex networks and data science because we have tools to understand how a network is built and what behavior we can expect from the entire system. Because of that, if you can model your data as a complex network, you gain a new set of tools to apply to it. In fact, there are many machine learning algorithms that can be applied to complex networks, as well as algorithms that can leverage network information for prediction. Even though this intersection is relatively new, we can already play around with it a bit.
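To make the idea concrete, here is a minimal sketch in pure Python (the toy edge list is hypothetical): a network is stored as an adjacency list, and each node's structural properties become a feature row that any off-the-shelf ML model could consume.

```python
# Toy network: nodes connected by edges that represent some relationship.
edges = [("a", "b"), ("a", "c"), ("b", "c"), ("c", "d"), ("d", "e")]

# Build an undirected adjacency list.
adj = {}
for u, v in edges:
    adj.setdefault(u, set()).add(v)
    adj.setdefault(v, set()).add(u)

def degree(node):
    """How many edges touch this node."""
    return len(adj[node])

def clustering(node):
    """Fraction of a node's neighbor pairs that are themselves connected."""
    nbrs = adj[node]
    k = len(nbrs)
    if k < 2:
        return 0.0
    links = sum(1 for x in nbrs for y in nbrs if x < y and y in adj[x])
    return links / (k * (k - 1) / 2)

# One feature row per node -- ready to feed a standard classifier.
features = {n: [degree(n), clustering(n)] for n in adj}
print(features["c"])  # degree 3, clustering 1/3
```

Real workloads would use a dedicated library such as networkx for this, since hand-rolled algorithms stop scaling long before millions of nodes.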


How to Find a Mentor and Get Started in Open Source

What separates open source from its proprietary counterpart is the open source community, made up of a mix of volunteers, super-fans and über-users of a product or suite of products. So while it’s understandably overwhelming to know where to start, there’s the unique benefit of built-in communities to support you. It’s good to start with an idea of what you want to get out of your contribution — a job, a mentor, experience in a methodology, service, interest or coding language. Use the CNCF project landscape to search by your interest — monitoring, securing, or deploying, for example — or by organization or skillset. Next, think about whether you want to be part of one of the biggest, horizontal communities or whether you’d feel more comfortable in a smaller niche. And then it’s about deciding what you want to put in to achieve that goal. For Mohan, contributing to open source projects gives her experience in a wider breadth of technologies outside of her job, including Kubernetes and chaos engineering.


Securing a New World: Navigating Security in the Hybrid Work Era

Security doesn’t get any easier with some workers returning to the office, others staying home and quite a few doing a bit of both. That’s because the office, which was once the company’s security standard, is often full of devices that have been sitting idle since early last year. Security patches are issued all the time, and they should be installed as soon as they’re published. But a computer that has been turned off for a year, unable to download patches, is a vulnerable device. And there may be dozens or even hundreds of patches waiting in the queue that are needed to bring a device up to par. There are, not surprisingly, a host of recommendations that experts have offered to help security teams in their work. Educating employees on the threats that people and companies face is one of their top suggestions. Proofpoint’s State of the Phish survey emphasizes the need for a people-centric approach to cybersecurity protections and awareness training that accounts for changing conditions, like those constantly experienced throughout the pandemic.


Now’s the time for more industries to adopt a culture of operational resilience

When you think about resiliency and doing work in operational models, it’s a verb-based system, right? How are you going to do it? How are you going to serve? How are you going to manage? How are you going to change, modify, and adjust to immediate recovery? All of those verbs are what make resiliency happen. What differentiates one business sector from another aren’t those verbs. Those are immutable. It’s the nouns that change from sector to sector. So that same set of verbs, the same perspective we applied within financial services, is just as applicable when you think about telecommunications or power. ... We’re seeing resiliency among the top five concerns for board-level folks. They need a solution that can scale up and down. You cannot take a science fair project and impact an industry or provide value in the quick way these firms are looking for. The idea is to be able to try it out and experiment. And when they figure out exactly how to calibrate the solution for their culture and level of complexity, then they can rinse, repeat, and replicate to scale it out.


AWS's new quantum computing center aims to build a large-scale superconducting quantum computer

The launch of the AWS Center for Quantum Computing sees Amazon reiterating its ambition to take a leading role in the field of quantum computing, which is expected to one day unleash unprecedented amounts of compute power. Experts predict that quantum computers, when they are built to a large enough scale, will have the potential to solve problems that are impossible to run on classical computers, unlocking huge scientific and business opportunities in fields like materials science, transportation or manufacturing. There are several approaches to building quantum hardware, all relying on different methods to control and manipulate the building blocks of quantum computers, called qubits. AWS has announced that the company has chosen to focus its efforts on superconducting qubits -- the same method used by rival quantum teams at IBM and Google, among others. AWS reckons that superconducting processors have an edge on alternative approaches: "Superconducting qubits have several advantages, one of them being that they can leverage microfabrication techniques derived from the semiconductor industry," Nadia Carlsten tells ZDNet.


The causes of technical debt, and how to mitigate it

There is no single silver bullet that will fix technical debt. Instead, it needs to be addressed in a multi-faceted way. First, there needs to be a better cultural understanding across the entire business of precisely what it is. Importantly, stakeholders, including product owners, must also understand how their actions and decisions may be contributing. Going back to the credit card analogy, it helps if stakeholders bear in mind that they could be dealing with 22% or higher annual interest. In that light, ‘spending’ beyond the team’s limits and living with minimum payments becomes far less tempting. To pay off existing architectural and other types of technical debt, teams should compare their current minimum payments, and the impact of those payments on overall velocity and team morale, with the staggering expense of re-architecting part or all of a solution. Moving from a monolith to microservices is a good example. As mentioned, however, there is no one-size-fits-all solution. Long-term maintenance and ‘expenses’ need to be considered as well.
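The credit-card analogy can be run as a back-of-the-envelope calculation. This sketch uses entirely hypothetical figures (the 22% rate is from the analogy above; the principal and horizon are invented) to compare the compounding cost of carrying the debt against paying it off once.

```python
# Illustrative only: compare the cumulative "minimum payment" drag of
# technical debt against a one-time re-architecture of the same scope.
annual_interest = 0.22      # the credit-card analogy's interest rate
debt_principal = 100_000    # hypothetical cost to fix the shortcut today
years = 5

# Carrying the debt: the velocity tax compounds as workarounds pile up
# on top of earlier workarounds.
carrying_cost = debt_principal * ((1 + annual_interest) ** years - 1)

# Paying it off: a one-time re-architecture expense.
payoff_cost = debt_principal

print(round(carrying_cost))  # 170271 -- vs 100000 paid once
```

Even on these toy numbers, five years of "minimum payments" costs noticeably more than retiring the debt up front, which is the stakeholder conversation the author is describing.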


Why aren’t optical disks the top choice for archive storage?

Optical media is also designed with full backwards compatibility, meaning future BD-R and ODA drives will be able to read disks written in today’s drives. For example, you can read a CD-R disk written in 1991 in a current BD-R drive. In contrast, LTO-8 tape drives can read only LTO-7 and LTO-8 tapes; they cannot read LTO-6 or any earlier generation. BD-R media advertises a lifetime of 50 years and Sony advertises 100 years, both of which are longer than tape (30 years) and magnetic hard drives (five years). If you wanted a 50-year archive on LTO, you would be forced to migrate data at least once to avoid bit rot, but not, as some optical marketing material suggests, every 10 years. Many people migrate more often anyway to allow them to retire older tape drives and achieve greater storage density. There is also no current requirement to re-tension the tapes every so often. There is some debate about the bit error rate of optical versus tape, but that is a complex issue beyond the scope of this article.
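The lifetimes quoted above imply a simple migration arithmetic. This is a sketch, not archive-planning advice: real schedules also weigh drive availability, density gains, and format obsolescence.

```python
import math

def migrations_needed(archive_years, media_life_years):
    """Minimum number of rewrites to fresh media over an archive's life,
    assuming media is replaced only when its rated lifetime runs out."""
    return math.ceil(archive_years / media_life_years) - 1

print(migrations_needed(50, 30))   # LTO at its 30-year rating: 1 migration
print(migrations_needed(50, 50))   # BD-R at its advertised 50 years: 0
print(migrations_needed(50, 100))  # Sony's 100-year media: 0
```

This matches the article's point: a 50-year archive on tape needs at least one migration, while media rated at 50 years or more would, on paper, need none.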


How to develop a high-impact team

Innovation is increasingly becoming a team sport, requiring diverse perspectives and collective intelligence. These innovation-focused teams tend to be ephemeral. They form, collaborate, and disband quickly. Team members need to be able to step up and step back with equal ease. To participate in this fast, fluid model of leadership, less assertive employees (and those uninterested in careers in management) will likely need help stepping up. To get these reluctant leaders to step up and then step back, provide a path of retreat. Show them that being a designated leader can be a temporary assignment, existing for the duration of a project or even for just a single meeting. Some team members will need encouragement and support to become “step-up” leaders, but others will do so with ease. It can take work to then get them to step back and support others. You can help these people develop a more fluid leadership style by modeling healthy followership practices. Let them see you collaborating with a peer organization or contributing to a project led by someone below you in the management hierarchy.


Why automation progress stalls: 3 hidden culture challenges

“A general challenge with putting automation in place is that IT culture often focuses on heroic problem-solving rather than more mundane processes that prevent problems from happening in the first place,” says Red Hat technology evangelist Gordon Haff. “Automation has long been part of the picture – think system admins writing Bash scripts – but it’s also been reactive rather than proactive.” If your organization has treated automation mostly as a reactive problem-solver in the past, people may be less inclined to instinctively grasp its greater value. That’s where leaders have work to do in terms of communicating your big-picture plan and the role that automation – and everyone on the team – plays in it. This is also a mindset that must shift over time with experience and results: Automation should be as much (or more) about improvement and optimization as it is about dousing production fires or cutting costs. Ideally, automation should be boring, in the best possible sense of the word. “Modern automation practices, such as we often see in SRE roles, make automating systems and workflows part of the daily routine,” Haff says.


Regulation fatigue: A challenge to shift processes left

President Biden’s recent executive order asks government vendors to attest “to the extent practicable, to the integrity and provenance of open source software used within any portion of a product.” The president’s recent order, and the potential actions of legislators to follow, could lead to burdensome regulations that interfere with shift left practices, and ultimately slow down the pace of software development. The challenge with the directive is that nearly 60 percent of software developers have little to no secure coding training. Developers are traditionally focused on pushing out innovative, stable products, not triaging security alerts. They want to use open-source code without thinking about its possible security risks. Developers rely on open-source components because these are ready-made pieces of code that allow them to keep up with competitive release time frames. They often leave it to their security teams to identify mistakes at the end of the development process. Developers’ reliance on open-source components often presents a challenge to the cautious attitude of security teams. 



Quote for the day:

"Leaders, be mindful that there is a tendency to become arrogant. Such hubris blinds even the best intentions. Lead with humility." -- S Max Brown

Daily Tech Digest - October 27, 2021

Node.js makes fullstack programming easy with server-side JavaScript

Web application developers are inundated with options when it comes to choosing the languages, frameworks, libraries, and environments they will use to build their applications. Depending on which statistics you believe, the total number of available languages is somewhere between 700 and 9,000. The most popular—for the past nine years, according to the 2021 Stack Overflow Developer Survey—is JavaScript. Most people think of JavaScript as a front-end language, but Node.js allows you to run it on the server side as well. Originally launched in 2009, Node.js has quickly become one of the most widely used options among application developers. More than half of developers are now using it—the most popular non-language, non-database development tool—which lets software engineers develop across the full web stack. Node.js’s popularity has snowballed for good reason: it is a fast, low-cost, effective alternative to other back-end solutions. And with its two-way client-server communication channel, it is hard to beat for cross-platform development.


Your Data Plane Is Not a Commodity

If you are going to invest a ton of time, effort and engineering hours in a service mesh and a Kubernetes rollout, why would you want to buy the equivalent of cheap tires – in this case, a newer and minimally tested data plane written in a language that may not even have been designed to handle wire-speed application traffic? Because, truly, your data plane is where the rubber meets the road for your microservices. The data plane is what will directly influence customer perceptions of performance. The data plane is where problems will be visible. The data plane will feel scaling requirements first and most acutely. A slow-to-respond data plane will slow the entire Kubernetes engine down and affect system performance. Like tires, too, the data plane is relatively easy to swap out. You do not necessarily need major surgery to pick the one you think is best and mount it on your favorite service mesh and Kubernetes platform, but at what cost?


Why traditional IP networking is wrong for the cloud

Of course, the IP networking layer does provide a way to connect your data center to the cloud. However, one of the main challenges of legacy networking is that it provides limited visibility into applications in the cloud—the lifeblood of enterprises today and arguably the primary driver behind cloud adoption. At Layer 7, or the so-called application layer, enterprises have a holistic view of what takes place at that level (applications and collections of services) as well as in the stack below, such as at TCP and UDP ports and IP endpoints. By operating with the traditional stack (i.e., the IP layer) alone, enterprise teams have a substantially harder time viewing what is above them in the stack. They have a view of the network alone, and blind spots for everything else. Why does this matter? For one, it can significantly increase remediation time when performance problems occur. Indeed, enterprises need to understand how their cloud infrastructure works in relation to the application, and to A/B test configurations to align them with application performance.


Defining the Developer Experience

Microservices architecture and cloud-native applications go hand in hand. Most organizations leverage a microservice architecture to decouple and achieve greater scale; without it, you have too many people changing the same code, causing velocity to slow as friction increases. Whereas in a monolithic architecture teams would be bumping into each other to merge, release, and deploy their changes to the monolith, in a microservices architecture each team can clearly define the interfaces between their components, limiting the size and complexity of the codebase they are managing to that of a smaller, more agile team. Each team can move more quickly since it can focus on the components it owns. Its level of friction and velocity can be that of just the group working on that component, not that of the larger development organization. ... But this creates its own problems as well, a key one being the complexity of needing to ensure the cohesive whole also gets tested and functions together as a complete software product.


How we built a forever-free serverless SQL database

How can we afford to give this away? Well, certainly we’re hoping that some of you will build successful apps that “go big” and you’ll become paying customers. But beyond that, we’ve created an innovative Serverless architecture that allows us to securely host thousands of virtualized CockroachDB database clusters on a single underlying physical CockroachDB database cluster. This means that a tiny database with a few kilobytes of storage and a handful of requests costs us almost nothing to run, because it’s running on just a small slice of the physical hardware. ... Given that the SQL layer is so difficult to share, we decided to isolate that in per-tenant processes, along with the transactional and distribution components from the KV layer. Meanwhile, the KV replication and storage components continue to run on storage nodes that are shared across all tenants. By making this separation, we get “the best of both worlds” – the security and isolation of per-tenant SQL processes and the efficiency of shared storage nodes.
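The isolation/sharing split described here can be sketched in a few lines. This is a deliberately simplified model, not CockroachDB's actual design or code: each tenant gets its own logical SQL front end, while every tenant's rows live in one shared key-value store, namespaced by tenant ID so tenants can never see each other's keys.

```python
# The shared storage layer: one physical KV store serving all tenants.
shared_kv = {}

class TenantSQL:
    """Per-tenant front end: isolated state, tenant-prefixed keys."""
    def __init__(self, tenant_id):
        self.tenant_id = tenant_id

    def put(self, key, value):
        # Every key is namespaced by tenant, so writes can't collide.
        shared_kv[(self.tenant_id, key)] = value

    def get(self, key):
        # Reads only ever see this tenant's namespace.
        return shared_kv.get((self.tenant_id, key))

alice, bob = TenantSQL("alice"), TenantSQL("bob")
alice.put("users/1", "Ada")
bob.put("users/1", "Bob")

print(alice.get("users/1"), bob.get("users/1"))  # Ada Bob -- no collision
```

The point of the sketch is the "best of both worlds" trade: per-tenant objects give isolation, while the single `shared_kv` dict stands in for the shared, efficiently utilized storage nodes.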


Why Outdated jQuery Is Still the Dominant JavaScript Library

Despite its enormous usage, developers today may not even be aware that they’re using jQuery. That’s because it’s embedded in a number of large projects — most notably, the WordPress platform. Many WordPress themes and plugins rely on jQuery. The jQuery library is also a foundational layer of some of today’s most popular JavaScript frameworks and toolkits, like AngularJS and Bootstrap (version 4.0 and below). “A lot of the surprise about jQuery usage stats comes from living in a bubble,” Gołębiowski-Owczarek told me. “Most websites are not complex Web apps needing a sophisticated framework, [they are] mostly static sites with some dynamic behaviors — often written using WordPress. jQuery is still very popular there; it works and it’s simple, so people don’t feel the need to stop using it.” jQuery will continue to be a part of WordPress for some time to come, if for no other reason than that it would be difficult to remove it without breaking backward compatibility.


How AI and AR are evolving in the workplace

Businesses are also using AR-based apps for tracking, identifying, and resolving technical issues, as well as for tasks such as retrofitting, assembling, manufacturing, and repairing production lines. AI is not only anticipated to help enterprises develop; the technology is also believed to help achieve business growth objectives and generate value. Nine out of 10 C-suite executives believe they must leverage AI to achieve their growth objectives. ... The difficulty with deploying evolving technologies is always that until they have fully matured, integration can be a challenge. With smart glasses, there can also be security and privacy concerns. In medical and surgical settings, for example, the use of cameras in operating rooms is very sensitive and controversial. For sensitive scenarios like these, the use of such devices must be agreed and understood beforehand to be for the benefit of all. While AI is a more developed technology, it is also costly and may require a strong upfront investment.


Good security habits: Leveraging the science behind how humans develop habits

There is a secret recipe for good security habits that we’ve discovered from decades of research: it’s called the habit loop. And you can use the habit loop to hack your own brain for better security. You start with a prompt – which is just the signal that tells you to start a behavior. Then there’s the behavior itself. And finally, the most important step, giving yourself a reward. Even if the reward is just patting yourself on the back, your brain starts to release endorphins so when you see the prompt again next time, your brain will want to do that behavior again to receive another reward. Security can seem scary to some people while to others it might feel like it’s too much work. Using the habit loop can help make security feel easy, because we don’t have to think about habits: by definition they are what we do when we’re on autopilot. But since habits make up about 50% of everything we do in our lives, it’s also the best way to have a massive impact on our security.


More Tech Spending Moves Out of IT

Karamouzis says this is leading to a shift in how organizations buy technology. Enterprises had previously moved from buying products to buying solutions -- a combination of products and services. These products and solutions were purchased in a serial fashion. That doesn’t work anymore, says Karamouzis, because now you must make four to 10 buying decisions concurrently to ensure different digital business initiatives lead to growth. This is part of a new way organizations buy; they are purchasing “outcomes,” she says. These changes have pushed organizations more to the public cloud, making enterprises and the entire global economy increasingly dependent on internet-delivered services. The most important of these services are provided directly by or running within hyperscale cloud services providers, says Gartner VP analyst Jay Heiser. “As everything becomes digital, virtually every aspect of society and the economy will have dependence upon the real-time functioning of a small number of public cloud services,” Heiser says.


Why Soul-Based Leadership Will Change the Nature of Remote and Hybrid Work

One of the most highly researched and evidence-based ways to invigorate executive function is the ancient practice of mindfulness. Although it has taken on a somewhat "pop" aura compared with its origins 2,500 years ago, developing mindfulness is actually hard work! But the payoff is big in terms of making more informed decisions and leading with care. I often recommend one technique I learned from one of my teachers, which I’ve personally modified a bit and call the Standing Ground Practice. You can be anywhere: sitting or standing at your desk or waiting on a corner to meet a friend. It’s ideal if you can go outside and stand facing a tree or something alive that’s naturally rooted in the earth, but it’s not necessary for the practice to be effective in this context. After finding your spot, bring your attention to the contact point between your feet and the ground or floor beneath you. Focus on that point and consider what it feels like. Thoughts about all kinds of things will most certainly interrupt.



Quote for the day:

"Discipline is the bridge between goals and accomplishment." -- Jim Rohn

Daily Tech Digest - October 25, 2021

Why you should use a microservice architecture

Simply moving your application to a microservice-based architecture is not sufficient. It is still possible to have a microservice-based architecture, but have your development teams work on projects that span services and create complex interactions between your teams. Bottom line: You can still be in the development muck, even if you move to a microservice-based architecture. To avoid these problems, you must have a clean service ownership and responsibility model. Each and every service needs a single, clear, well-defined owner who is wholly responsible for the service, and work needs to be managed and delegated at a service level. I suggest a model such as the Single Team Oriented Service Architecture (STOSA). This model, which I talk about in my book Architecting for Scale, provides the clarity that allows your application—and your development teams—to scale to match your business needs. Microservice architectures do come at a cost. While individual services are easier to understand and manage, a microservices application as a whole has significantly more moving parts and becomes a more complex beast of its own.
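The "single, well-defined owner per service" rule is easy to make mechanical. A minimal sketch (service and team names are hypothetical; this is an illustration of the ownership principle, not of STOSA's actual tooling): a registry that refuses to let a service acquire a second owner.

```python
# service name -> the one team wholly responsible for it
ownership = {}

def assign_owner(service, team):
    """Register a service's owner; reject any attempt at a second owner."""
    if service in ownership and ownership[service] != team:
        raise ValueError(f"{service} already owned by {ownership[service]}")
    ownership[service] = team

assign_owner("checkout", "payments-team")
assign_owner("catalog", "storefront-team")

try:
    assign_owner("checkout", "growth-team")  # a second owner: rejected
except ValueError as err:
    print(err)
```

In practice this kind of check might live in a service catalog's CI, so that work really is managed and delegated at the service level rather than drifting across teams.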


Routine is a new productivity app that combines task management and notes

One of the most opinionated features of Routine is the dashboard. Whatever you’re doing on your computer, you can pull up the Routine dashboard with a simple keyboard shortcut. By default, that shortcut is Ctrl-Space. The Routine app adds an overlay on top of your screen with a few widgets. It looks a bit like the now-defunct Dashboard on macOS. On that dashboard, you’ll find a handful of things. On the left, you can see the tasks you have to complete today. On the right, you can see how much time you have left before your next meeting and some information about that event. That data is pulled directly from your Google Calendar account. In the center of the screen, Routine displays a big input field called the Console. You can type text and then press enter to create a new task from there. It works a bit like the ‘Quick Add’ feature in Todoist. The idea is that you can add a task without wasting time opening your to-do app, moving to the right project, clicking the add task button and entering text into several fields. With Routine, you can press Ctrl-Space, type some text, press enter and you’re done.


3 Lessons I Learned The Hard Way As A Data Scientist

Whatever algorithm you implement or analysis you make, the results are used in downstream processes or production. Thus, it is of vital importance to make sure the results are correct. By correct, I do not mean having zero errors in your predictions or hitting 100% accuracy, which would be neither reasonable nor legitimate. In fact, you should be really suspicious of results that are too good to be true. The mistakes I mean are usually data-related issues. For instance, you might make a mistake while joining stock information for products from an SQL table to your main table. That results in serious problems if your solution is based on product stocks. There are almost always controls in your code that prevent mistakes. However, it is not possible for us to think of each and every possible mistake. Thus, taking a second look is always beneficial. ... The glorious world of machine learning algorithms is very attractive. The urge to use a fancy algorithm and build a model to perform some predictions might cause you to skip digging into the data.
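The "second look" after a join can be automated. A sketch with toy data (the product and stock tables, and their column names, are hypothetical): after attaching stock levels to the main product table, verify the join didn't silently drop rows, duplicate rows, or leave unmatched products behind.

```python
# Toy "main table" and "stock table" standing in for the SQL sources.
products = [{"id": 1, "name": "shirt"}, {"id": 2, "name": "shoe"}]
stock = {1: 12, 2: 0}  # product id -> units on hand

# The join: attach each product's stock level (None if no match).
joined = [{**p, "stock": stock.get(p["id"])} for p in products]

# The second look: controls that catch the silent join mistakes
# described above before they reach production.
assert len(joined) == len(products), "join changed the row count"
unmatched = [row["name"] for row in joined if row["stock"] is None]
assert not unmatched, f"products with no stock record: {unmatched}"

print(joined[0])  # {'id': 1, 'name': 'shirt', 'stock': 12}
```

With a real SQL or pandas join the same idea applies: check row counts and match rates immediately after the join, not after the predictions look strange.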


Research finds consumer-grade IoT devices showing up... on corporate networks

"Remote workers need to be aware that IoT devices could be compromised and used to move laterally to access their work devices if they're both using the same home router, which in turn could allow attackers to move onto corporate systems," said Palo Alto. Poor IoT device security stems mainly from manufacturers' desire to keep price points low, cutting security out as an unnecessary overhead. This approach inadvertently exposed large numbers of easily pwned devices to the wider internet – causing such a headache that governments around the world are now preparing to mandate better IoT security standards. Even IoT trade groups have woken up to the threat, albeit perhaps the threat of regulation rather than the security threat, but if that's what it takes, the outcome is no bad thing. ... Half of respondents said they worried about attacks against their industrial IoT devices, with 46 per cent being similarly worried about connected cameras being compromised. Smart cameras are a tried-and-trusted compromise method for miscreants.


The Rise Of No-Code And Low-Code Solutions: Will Your CTO Become Obsolete?

There are many reasons behind the rise of no-code and low-code tools, but the key one is a large imbalance between the ever-growing demand for software development services and the shortage of skilled developers in the market. For decades, there's been a movement away from complicated coding in favor of easy-to-use visual tools. However, over time, no-code and low-code platforms have become more sophisticated, allowing non-developers to build more powerful websites and applications without hiring software specialists. That has even evoked some neo-Luddite concerns and discussions about the potential of such platforms to make good old software developers obsolete. But what’s behind it? Both no-code and low-code approaches hide the complexities of software programming under the mask of high-level abstractions. Low-code reduces programming effort down to minimal levels, and no-code empowers anyone to create apps without any knowledge of programming.


Complex Systems: Microservices and Humans

There is one aspect to this that I think is worth talking about, and that is that we actually already have an organization of people. We work in organizations that are, in general, organized into teams. You see a theoretical org chart here on the left. This might look like something that you might see in your own companies. We have these org charts, and these organizations of teams. Then that org chart doesn't map very neatly onto the microservices architecture necessarily, and maybe it shouldn't. The interrelationships between these teams are actually more subtle and often more complicated than what you see in the org chart. That is because if you have microservices, and you have dependencies between these microservices and interactions between them, then the teams owning them, by necessity, sometimes need to interact with each other. Microservices are constructed in a way that gives as much independence as possible and as much autonomy as possible to the individual teams. 


Maximizing agile productivity to meet shareholder commitments

Companies’ public commitments to ambitious—and sometimes expansive—goals tend to have multiyear timelines, while agile teams are trained to focus on the next three to six months. In organizations with siloed processes, product owners often feel that they don’t have enough visibility into their organizations’ processes to forecast the timeline for their initiatives, let alone to predict the long-term impact of their work. To balance the demands of the near future with longer-term goals, the companies that meet their transformation goals support agile teams with information and expertise. Successful companies provide product owners with relevant financial and operational data for the company, benchmarked to best-in-class organizations, to help them assess the potential value of their work for the next 18–24 months. They also assign initiative owners and relevant subject-matter experts from business functions early in the research and discovery process to help quantify possible improvements to the existing journey.


Satellite IoT dreams are crashing into reality

Even with smaller satellites, building a profitable wireless network is hard. On one side, there’s a capital-intensive phase that requires establishing connectivity (in this case, by building and launching satellites) and on the other, these companies must establish a market for the connectivity. But while the economics of building and launching satellites have changed dramatically, the demand for devices that rely on satellite networks hasn’t kept up. The biggest growth has come from people-tracking products, such as the Garmin inReach walkie-talkies, which people can wear into the wilderness and use to get help if needed. There are also rumors that Apple may include some form of satellite service in an upcoming iPhone. While this is a real and growing market, however, it isn’t enough to justify the launch of constellations by almost a dozen companies whose goal is to be IoT connectivity providers. So former connectivity players eschew bandwidth and turn to full solutions in order to provide a service that isn’t a commodity and eke out more revenue per customer.


Interesting Application Garbage Collection Patterns

When an application is caching many objects in memory, GC events won’t be able to drop the heap usage all the way to the bottom of the graph (like you saw in the earlier ‘healthy saw-tooth’ pattern). ... You can notice that heap usage keeps growing. When it reaches around ~60GB, a GC event (depicted as a small green square in the graph) gets triggered. However, these GC events aren’t able to drop the heap usage below ~38GB. Please refer to the dotted black arrow line in the graph. In contrast, in the earlier ‘healthy saw-tooth’ pattern, you can see heap usage dropping all the way to the bottom, to ~200MB. When you see this sort of pattern (i.e., heap usage not dropping all the way to the bottom), it indicates that the application is caching a lot of objects in memory. In that case, you may want to investigate your application’s heap using heap dump analysis tools like yCrash, HeapHero, or Eclipse MAT and figure out whether you need to cache this many objects in memory. Several times, you might uncover objects that don’t need to be cached in memory at all. Here is the real-world GC log analysis report, which depicts this ‘heavy caching’ pattern.
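The same pattern can be flagged programmatically once a GC log has been parsed. A small sketch with invented numbers (heap samples in GB; the 50% threshold is an arbitrary illustrative cutoff, not a tuning recommendation): a healthy collection returns heap usage near its baseline, while a cache-heavy app's post-GC floor stays high.

```python
# Post-GC heap usage after each collection, in GB (illustrative values
# echoing the ~38GB floor / ~60GB peak described above).
post_gc_heap = [38.2, 38.5, 38.1, 38.9, 38.4]
max_heap = 60.0

# The lowest point GC ever reclaims down to: the post-GC "floor".
floor = min(post_gc_heap)

if floor > 0.5 * max_heap:
    print(f"heavy caching suspected: post-GC floor of {floor} GB "
          f"on a {max_heap} GB heap")
```

In a healthy saw-tooth, `floor` would sit near the application's true baseline (the ~200MB case above); a floor that stays at a large fraction of the heap is the cue to take a heap dump and inspect what is being cached.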


Designing the Internet of Things: role for enterprise architects, IoT architects, or both?

Great use cases, but an architectural nightmare that calls for a new role to plan and piece it all together into a coherent and viable system. This may be someone in a relatively new role, the IoT architect, or an expansion of the current role of the enterprise architect. The need for architects of either stripe was recently explored in a Gartner eBook, which looked at the ingredients needed to ensure success with enterprise IoT. ... Those having such capabilities in two or more of these areas will be in extremely high demand. The good news is that organizations can use existing digital business efforts to train up candidates." Responsibilities for the IoT architect role include the following: "Engaging and collaborating with stakeholders to establish an IoT vision and define clear business objectives."; "Designing an edge-to-enterprise IoT architecture."; "Establishing processes for constructing and operating IoT solutions."; and "Working with the organization's architecture and technical teams to deliver value." Then there are enterprise architects, who are likely to see their roles greatly expanded to encompass the extended architectures the IoT is bringing.



Quote for the day:

"Leadership is familiar, but not well understood." -- Gerald Weinberg

Daily Tech Digest - October 24, 2021

Artificial Intelligence Is Smart, but It Doesn’t Play Well With Others

Humans hating their AI teammates could be of concern for researchers designing this technology to one day work with humans on real challenges — like defending against missiles or performing complex surgery. This dynamic, called teaming intelligence, is a next frontier in AI research, and it uses a particular kind of AI called reinforcement learning. A reinforcement learning AI is not told which actions to take, but instead discovers which actions yield the most numerical “reward” by trying out scenarios again and again. It is this technology that has yielded the superhuman chess and Go players. Unlike rule-based algorithms, these AIs aren’t programmed to follow “if/then” statements, because the possible outcomes of the human tasks they’re slated to tackle, like driving a car, are far too many to code. “Reinforcement learning is a much more general-purpose way of developing AI. If you can train it to learn how to play the game of chess, that agent won’t necessarily go drive a car. But you can use the same algorithms to train a different agent to drive a car, given the right data,” Allen says. “The sky’s the limit in what it could, in theory, do.”
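The trial-and-error loop the passage describes can be sketched with tabular Q-learning, the simplest flavor of reinforcement learning. The toy corridor environment below is invented for illustration and is unrelated to the MIT research discussed; the agent is never told to move right, yet it discovers that policy from reward alone.

```python
import random

# Toy corridor: states 0..4; the only reward is for reaching state 4.
N_STATES, GOAL = 5, 4
ACTIONS = (-1, +1)  # step left / step right

def step(state, action):
    nxt = min(max(state + action, 0), GOAL)
    return nxt, (1.0 if nxt == GOAL else 0.0)

# Q-table: estimated discounted reward for each (state, action) pair.
Q = {(s, a): 0.0 for s in range(N_STATES) for a in ACTIONS}
alpha, gamma, eps = 0.5, 0.9, 0.2  # learning rate, discount, exploration
random.seed(0)

for episode in range(500):
    s = 0
    while s != GOAL:
        # Epsilon-greedy: mostly exploit what was learned, sometimes explore.
        if random.random() < eps:
            a = random.choice(ACTIONS)
        else:
            a = max(ACTIONS, key=lambda act: Q[(s, act)])
        nxt, r = step(s, a)
        best_next = max(Q[(nxt, b)] for b in ACTIONS)
        Q[(s, a)] += alpha * (r + gamma * best_next - Q[(s, a)])
        s = nxt

# The learned greedy policy moves right (+1) from every non-goal state.
policy = {s: max(ACTIONS, key=lambda act: Q[(s, act)]) for s in range(GOAL)}
print(policy)
```

The same update rule, fed different states, actions, and rewards, is what trains chess agents or driving agents, which is the generality Allen's quote is pointing at.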


CDR: The secret cybersecurity ingredient used by defense and intelligence agencies

Employees in the defense and intelligence sector are in near-constant contact with each other, often sharing information under challenging circumstances. They move files and documents from low-trust environments into networks that hold a nation’s most sensitive data, where a data breach could have a serious impact on national security. Consequently, when it comes to sharing any kind of document, these teams cannot risk threats slipping through the net. Human attackers are now using machines to engineer malware at a pace that was unimaginable only a few years ago. Today, it’s possible to engineer a new piece of malware and make each version of that file sufficiently different that it’s almost impossible for traditional malware protection solutions to identify. In the same way that Facebook or Twitter use algorithms to create a truly unique social feed of information tailored to the interests and tastes of a user, bad actors can use similar algorithms to deploy essentially the same underlying threats, but packaged in ways that simply evade detection.

Gartner advises tech leaders to prepare for action as quantum computing spreads

Cambridge Quantum’s efforts to expand quantum infrastructure got significant backing earlier this year when Honeywell said it would merge its own quantum computing operations with Cambridge Quantum to form an independent company pursuing cybersecurity, drug discovery, optimization, material science, and other applications, including AI. Honeywell said it would invest between $270 million and $300 million in the new operation. Cambridge Quantum said it would remain independent, working with various quantum computing players, including IBM. The lambeq work is part of an overall AI project that is the longest-term effort at Cambridge Quantum, said Ilyas Khan, founder and CEO of Cambridge Quantum, in an e-mail interview. “We might be pleasantly surprised in terms of timelines, but we believe that NLP is right at the heart of AI more generally and therefore something that will really come to the fore as quantum computers scale,” he said. Khan cited cybersecurity and quantum chemistry as the most advanced application areas in Cambridge Quantum’s estimation.


How to Not Lose Your Job to Low-Code Software

The amount of work you have is driven by the ability of software to make a meaningful difference in your organization. Take a look at your current queue of work. If your team is like most IT teams, there will be a mountain of unmet demand for new applications or additional functionality for existing applications. Thinking that any amount of automation will reduce that demand to zero is like thinking that a faster car will get you to Mars. If low-code software starts taking some of your work, there will likely be other projects you can work on. If you handle this right, you can even shuffle some of the painful projects over to the party-goers on the low-code bus. ... Secondly, and more fundamentally, there are certain aspects of software engineering that are harder to automate than others, making them unsuitable terrain for the low-code party bus to drive across. For example, low-code tools make it easy for non-developers to create a table to store data. But they can't do much to help the non-developer structure their tables to best map to the business problem they are trying to solve.


API contract testing with Joi

When you sign a contract, you expect both parties to hold up their end of the bargain. The same can be true for testing applications. Contract testing is a way to make sure that services can communicate with each other and that the data shared between the services is consistent with a specified set of rules. In this post, I will guide you through using Joi as a library to create API contracts for services consuming an API. ... Before we get started, let me give you some background on contract testing. This kind of testing provides confidence that different services work when they are required to. Imagine that an organization has multiple payment services that utilize an Authentication API. The API logs users into an application with a username and a password. It then assigns them an access token when the log-in operation is successful. Other services like Loans and Repayments require the Authentication API service once users are logged in. ... Contract tests are designed to monitor the state of an application and notify testers when there is an unexpected result. They are most effective when used by teams that rely on the stability of other services.
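Joi itself is a JavaScript library; to keep this digest's examples in a single language, here is the same consumer-side idea sketched in plain Python with a hand-rolled validator. The Authentication API's response fields (`token`, `expires_in`) are hypothetical stand-ins, not the article's actual schema.

```python
# A contract: the fields and types a consumer expects from the
# Authentication API's login response (field names are hypothetical).
CONTRACT = {
    "token": str,       # access token issued on successful login
    "expires_in": int,  # token lifetime in seconds
}

def violations(response):
    """Return a list of contract violations; an empty list means it holds."""
    problems = []
    for field, expected_type in CONTRACT.items():
        if field not in response:
            problems.append(f"missing field: {field}")
        elif not isinstance(response[field], expected_type):
            problems.append(f"{field}: expected {expected_type.__name__}")
    return problems

good = {"token": "abc123", "expires_in": 3600}
bad = {"token": 42}  # wrong type, and expires_in is missing

print(violations(good))  # []
print(violations(bad))   # ['token: expected str', 'missing field: expires_in']
```

With Joi, the contract would instead be a schema object with required string and number keys, and a failing validation plays the role of the non-empty list here.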


Regulating Crypto: Is It Different – Or Is It the Same?

Regulators need to know what the technology is capable of, but they need not know every technical detail just to make good law. “If you can understand clearly what the technology is doing, I think that you can make pretty good judgments about what the fundamental financial activity is and what regulatory box that financial activity can or should fit in,” he told Webster. Strip those technologies down a bit, and they boil down to some basic underpinning concepts that lend themselves to governance. At the core of blockchain and cryptos is database architecture, said Gerety. “It has some neat properties, but nowhere else in the financial services industry do you get regulated differently if you use SAP or Oracle,” he said. To get a sense of how one might approach “newness” in a sector, he offered a concept of a matrix, with axes denoting what the future “feels like” and might actually “be.” Babies will pretty much always “be” and “feel” the same. Not much in the way of technology will change the experience or feelings one will have with birthing and raising a child, despite the newness of, well, becoming a parent.


Information Theory: Principles and Apostasy

Let’s start with a data science interview question. As part of an initial screening round for entry-level candidates, I like to find an example on their CV of a project that used real-life data. Real-life data is much nastier than academic and research data. It's chock-full of missing values, mixed (integer and string) data, and outliers that make consuming and modeling the information grossly more difficult. Invariably, most of the conversation revolves around these real-world considerations. How do you handle missing data? The usual answer involves some sort of information-replacement strategy, like substituting the average value of the column. Fair and reasonable. How do we deal with malformed or mixed data? Again, usually a fair answer involving mapping strings to numbers. Finally, what did you do about the large outlier events? Usually the answer is that they ‘removed them’ because you ‘can’t be expected to predict rare events.’ The ultimate justification: it improved the model's accuracy. That’s a good answer if building a forecast is a game or contest, and a much worse one if you actually want to use the forecast.
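The three interview answers above can be made concrete with a short sketch (the numbers are invented). Note how the mean-imputed series stays honest about scale, while trimming the one rare event makes the data look far tamer than it really is:

```python
from statistics import mean

# A column of daily losses with missing entries and one rare, large event.
raw = [1.0, 2.0, None, 1.5, 2.5, None, 50.0]

# Mean imputation: replace each missing entry with the column average.
observed = [x for x in raw if x is not None]
col_mean = mean(observed)
imputed = [col_mean if x is None else x for x in raw]

# "Removing the outliers" before modeling:
trimmed = [x for x in observed if x < 10]

print(round(col_mean, 2))       # 11.4 -- the rare event dominates the mean
print(round(mean(trimmed), 2))  # 1.75 -- trimming hides the rare event
```

A model fitted to `trimmed` will score better precisely because the events that matter most were deleted before it ever saw them.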


The OCC Officially Recognizes the Critical and Permanent Role of Blockchain in Banking

This is noteworthy for a couple of reasons. First, it is a recognition that many banks, along with a slew of other financial institutions, are adopting DLT as a technology enabling better processes. Simply put, financial institutions are moving past the exploratory phase of DLT and are now actually implementing the technology into their operations. Secondly, the OCC is declaring its intent to explore and define appropriate governance processes for banks to deploy when such changes are implemented. In other words, the OCC is defining its intent to regulate how such changes should take place. ... The immutability of a distributed ledger provides a new level of security. It is challenging to establish a single customer view across different jurisdictions and business lines. With mutualized data management, DLT allows permitted parties to share data securely and in real time, which could address challenges of Know Your Customer (KYC) and Anti Money Laundering (AML). The themes are clear – DLT injected into the banking and financial ecosystem is an equalizer, a simplifier and a fortifier.


How data drives Air Canada’s cargo business

For business intelligence, the airline has been a long-term user of WebFocus from Tibco. It also uses Microsoft PowerBI. Riboulet’s reason for using two BI platforms is that “they complement each other”, each having different functions it finds useful. For example, WebFocus offers Air Canada the ability to push out reports via email, a feature not available in PowerBI. Riboulet says this is useful for people working in operations, who may only have access to their phone and need to see embedded reports. Also, the data team noticed that many business users require similar datasets and attributes, which can be pulled together into pre-built reports. The company also uses the data grid feature in WebFocus to aggregate data in a way that can easily be customised by users and can be exported to Microsoft Excel. It has also deployed WebFocus Hyperstage as a staging area for data, to avoid direct access to its on-premise database systems. Riboulet views the data team at Air Canada Cargo as internal consultants who discuss data requirements with businesspeople.


How Much Power Should Finance Have Over Their Automations?

If you want to automate your finance function and lower the cost of operating your finance and accounting processes, taking control can provide you with numerous benefits. These include prioritizing the processes that align with your strategic vision, controlling resource investments and commitments, and ensuring SOX control frameworks are adhered to from the outset. It’s not surprising that some finance organizations feel underserved by their IT partners: IT is responsible for supporting the whole organization, and finance operations can take a back seat to other priorities. This does not mean that IT should be left aside. IT will have a role even if you run your own automation program end-to-end, and you will need them to have a seat at the table. You will want to avoid creating a shadow IT group and truly focus your financial resources on process improvement and automation. It’s best practice to leverage your IT team for infrastructure, network security, understanding ERP/system schedules, roadmaps, and disaster recovery processes (at a minimum). It is also recommended to adopt the cloud version of the tools, which can significantly reduce the demands on your IT organization.



Quote for the day:

"Problem-solving leaders have one thing in common: a faith that there's always a better way." -- Gerald M. Weinberg

Daily Tech Digest - October 23, 2021

How Artificial Intelligence is Changing DevOps?

AI automatic testing tools can generate tests automatically, with little to no code at all, so developers don’t have to worry about writing test code. The AI has evolved enough to generate tests by learning the app's flows, screens, and elements with little to no human involvement. The automation tools are so well built, and perform automated audits and checks so frequently, that there are almost no instances of errors. They capture feedback at every instant, analyze the input, and identify errors in real time. The intelligence of these tools allows developers and other team members to reduce their participation in test-automation creation and free up time to focus on more important and urgent tasks, eventually building a more productive system for the organization. AI is proficient at handling big data with minimal human involvement. For DevOps, this means that huge data sets can now be managed with minimal effort. Since DevOps involves and impacts three functions of an organization simultaneously, it also has tons of data to be managed and maintained on an everyday basis.


How low-code/no-code solutions and automation can triage employee turnover

We are continuing to see AI getting better but it needs to be applied in the right places. For example, teams can automate more of their processes and manual tasks, improving workflows and reducing busywork for agents. Costs are reduced, and customer demand is more easily met, which has proven to lead to happier, more productive support teams. “Solving customer problems and leaving a customer happy is what gets an agent excited about getting up every day and going into the job,” Wolverton says. “We strongly believe getting all of that repetitive work and processes automated so that they can focus on the rewarding work is what’s going to keep their motivation high and keep them in your organization.” It’s also about letting customers help themselves, she adds — more and more, customers want their answer fast, and waiting on the phone for the next available agent isn’t going to cut it. If you can get people their answer quickly and accurately through a search or through a bot, and then only escalate when the issue becomes more complex and a human is uniquely qualified to handle the issue, you’re going to have far more satisfied customers.


Non-Traditional Cybersecurity Career Paths: Entering the Industry

“I’d never considered cyber or even information technology as a career growing up. My interests always lay in history and physics. I in fact failed first-year engineering for having written an essay on David Hume when asked to discuss induction in engineering. I have an undergraduate degree with a double major in history & philosophy of science and quantum physics. I continued down this path, working in the university’s quantum computing department on the development of quantum circuitry. My work centered on the development of superconducting diamond[s], looking to test and establish the reality of theoretical models predicting room-temperature superconductivity. I believed in making Marty McFly’s future a reality; I was on the path to making superconducting circuitry with the sci-fi application of a hoverboard — although I still don’t believe it’d be able to hover across water. “One day while taking adult skiing lessons with an instructor (now my fiancé), I realized my skillsets weren’t technically focused but operational. I’d spent my theses developing, constructing and rebuilding processes.


Agile talent: How to revamp your people model to enable value through agility

When you cut through it, making the move to agile means you’re really going to be breaking the company down into self-sufficient, multidisciplinary, multidimensional teams. That’s the very essence of agile. However, it’s not all about structure. There are many barriers that must be removed to allow those teams to really work. Some barriers you don’t quite realize are there, and many other barriers don’t appear as barriers today but do appear as barriers going forward. So if you do move the organization to agile, be prepared to drive through a number of the barriers. Because you only really get the true benefit that lies in agile if you’re prepared to put those to the stake. I have talked to many organizations interested in the transition to agile, and in the early conversations the focus is understandably always on the organization’s structure. Having “seen the movie,” and helped many companies in the making of their movie, if I had $100 to spend on agile, I’d put only $10 to $15 against organizational structure. All of the rest I would invest in agile ceremonies and processes, particularly in the people processes.


What Are Low-Code/No-Code Platforms?

Low-code/no-code platforms and capabilities are now being provided by a wide range of providers including startups trying to fill various niches in the technology all the way up to the large enterprise products and services companies. We have covered the low-code/no-code options that are available with Microsoft, Google and Amazon previously. While there is plenty of crossover ability to connect to the other companies’ products and services, Amazon is the only one that lacks any ability to tie into data that might be hosted on the other two low-code/no-code platforms. Choosing a low-code/no-code platform will likely be impacted by where an organization has its data located. Just like other services offered by these big three companies, it is much easier to work within the same ecosystem rather than mixing and matching across low-code/no-code tools. Once that decision is made, the work of building out those first low-code tools for an organization should be fairly straightforward. Low-code/no-code development intentionally targets knowledge workers who have familiarity with the processes and workflows within their business unit, department or division but do not necessarily have any coding experience.


PostgreSQL v14 Is Faster, and Friendly to Developers

This release also brings more features to parallel query execution, in which PostgreSQL can devise query plans that leverage multiple CPUs to answer queries faster. Now your database can execute queries in parallel for RETURN QUERY and REFRESH MATERIALIZED VIEW. More prominent updates include pipeline mode for libpq, the interface developers use to connect their applications to the database. Previously, libpq would wait for one query to complete execution before sending the next one to the database. Now devs can feed multiple transactions into the pipeline, and libpq will execute them in turn, feeding results back into the application. The application no longer has to wait for the first transaction to complete before executing the next one. This was one of the updates about which Shahid commented, “Why did we not think about this earlier? This is such a no-brainer! But that’s how technology progresses.” Another potential no-brainer-in-hindsight is an upgrade to TOAST, the system that allows the storage of much larger data, which now allows for LZ4 compression.


Encouraging STEM uptake: why plugging the skills gap starts at school

Part of the challenge for businesses has been that leaders and recruiters still use assumptions about the value of certain backgrounds and degrees as the basis for their hiring strategy. This has been a particular issue in the technology industry where a formal ‘technical background’ has long been viewed as a minimum requirement to get on the career ladder. In some forward-thinking companies, however, there is more value now being placed on soft skills, such as creativity, persuasion and collaboration. These companies also recognise that employees can build specialist technical skills via routes such as internships, apprenticeships or on the job training. To play a full role in building the STEM workforce, businesses should also offer wider support to organisations that are working to ensure equal opportunities for girls and women. Code First Girls is one of a growing number of organisations that support young adult and working age women, in their case, “to become kick-ass developers and future leaders.” Businesses that are committed to equality of opportunity in their technical teams can help promote inclusion and tap into the female talent pool by working with these like-minded organisations.


Simplifying the complex: Introducing Privacy Management for Microsoft 365

Staying ahead of data privacy regulations and understanding the technical actions you can take to address compliance can be daunting. To help, Microsoft Compliance Manager today has more than 200 regulatory assessment templates covering global, industrial, and regional data protection and privacy regulations, making it easier for customers to interpret, assess, and improve their compliance with regulatory requirements. We recently added three privacy-specific assessments for the Colorado Privacy Act, the Virginia Consumer Data Protection Act (CDPA), and the Egypt Privacy Law. Additionally, we have mapped privacy-specific controls across these assessment templates to the new Privacy Management solution to help you scale your compliance efforts. You can learn more about Compliance Manager, our list of available assessments, and how to use the assessments in our documentation. You can also try the Compliance Manager 90-day trial, which gives you access to 25 assessments.


Remote and hybrid work: 4 tips to ease onboarding

By their nature, hybrid or remote office environments encourage asynchronous collaboration, as not everyone will be online or in the office at the same time. To make asynchronous workflows more manageable, consider the following tips: Minimize context switching by muting unnecessary communication channels, not feeling the need to respond immediately, and using messaging apps like Slack asynchronously; Set up Slack channels for different languages so people can easily communicate with one another on their own time (this is particularly helpful if you’re working with developers from around the world); Use project management tools, such as Jira, which allow everyone to provide input into projects on their own time. These tools also help reduce Zoom fatigue while giving team members the chance to complete tasks irrespective of their time zones. Working in a remote or hybrid environment can be challenging for many teams. But these recommendations can help you reap significant benefits. You’ll have a chance to attract, retain, and get the most out of other talented developers and IT managers with unique perspectives and different backgrounds – and that will help everyone succeed.


Promoting Creativity in Software Development with the Kaizen Method

The Kaizen method creates continuous improvements by implementing constant positive changes. Over time, these small, gradual improvements can produce significant results. It has long been a key principle of lean manufacturing methods. In English, the word "kaizen" means change for the better (kai = change, zen = good). The philosophy was first introduced at Toyota in Japan after World War II. The car manufacturer formed quality circles — groups of workers who perform similar tasks — in its production process. The teams met regularly to identify and review work-related problems, analyze the situation, and offer improvement suggestions. ... By applying the Kaizen proactive model, SenecaGlobal recently initiated an innovative process to improve the billing rate for a key client by implementing agile methodologies and conducting regular risk assessments for delivery timelines. As part of discovery, the developers uncovered a way to eliminate the need for a third-party software solution to decrypt/encrypt credit card payments, which resulted in significant cost savings. 



Quote for the day:

"One machine can do the work of fifty ordinary men. No machine can do the work of one extraordinary man." -- Elbert Hubbard

Daily Tech Digest - October 21, 2021

7 secrets of successful vendor negotiation

Intentionally withholding critical information is also a terrible tactic. “Vendors and prospects do this all the time, and it never works,” Plato notes. For example: not having the funds necessary to acquire and deploy a technology and expecting the vendor to somehow provide a solution. “It’s unfair to waste a salesperson’s time if you’re not ready to purchase,” Plato states. The reverse is also true for vendors, he notes: “Don’t tell a customer you can meet their expectations when you cannot.” IT negotiations aren’t all that much different from any other type of business bargaining, observes Dmitry Bagrov, managing director of software development firm DataArt UK. “All negotiations rely on basic principles that are universal, and one of the most basic and most often forgotten is that the contract should be profitable for both sides.” Squeezing a vendor for an unprofitable rate or any other unrealistic concession will only result in an unhappy partner that may then look to increase its margin by supplying inflated estimates, inferior resources, and other types of corner-cutting. Bagrov cautions IT leaders not to fall for the old Hollywood bromide: “It’s not personal; it’s business.”


New Microsoft Sysmon report in VirusTotal improves security

Whether you’re an IT professional or a developer, you’re probably already using Microsoft Sysinternals utilities to help you manage, troubleshoot, and diagnose your Windows systems and applications. The powerful logging capabilities of Sysinternals utilities became indispensable for defenders as well, enabling security analytics and advanced detections. The System Monitor (Sysmon) utility, which records detailed information on the system’s activities in the Windows event log, is often used by security products to identify malicious activity. The new behavior report in VirusTotal includes extraction of Microsoft Sysmon logs for Windows executables (EXE) on Windows 10, with very low latency, and with Windows 11 on the roadmap. This is the latest milestone in the long history of collaboration between Microsoft and VirusTotal. Microsoft 365 Defender uses VirusTotal reports as an accurate threat intelligence source, and VirusTotal uses detections from Microsoft Defender Antivirus as a primary source of detection in its arsenal. Microsoft Sysinternals Autoruns, Process Explorer, and Sigcheck tools integrate VirusTotal reports, and VirusTotal itself uses Sigcheck to report details on Windows portable executable files.


Top tips for growth and success as a developer

The niche role of developers and the specialisation of their skillsets can often lead to isolation. Individuals may not necessarily collaborate with others on the same project, leaving them unaware of how the whole project was completed from start to finish. In contrast, a more collaborative approach, where individuals are encouraged to share ideas and actively work together on tasks can have a multitude of benefits. Not only does it provide a greater understanding of the project management aspect of developer projects, but it allows developers to gain insight, through the expertise of others, into code they may never have written before. ... While skilling up on new technologies is always good, developing your “soft” skills is equally important for your future career prospects. Open source gives you the chance to progress a range of these skills, such as communication, teamwork, and problem-solving. Even the most skilled developers can benefit from open source, where they can learn new skills and form important peer networks.


Database Testing Made Simple, Efficient and Fast

If you involve a database in your Java test suite, make sure it’s a containerized one. The Testcontainers framework takes care of the simplicity requirement. It adds the much-needed abstraction layer around Docker to provision, start, and tear down a container of your database during the test suite lifecycle. And it does so with minimal boilerplate, keeping your tests readable. ... An efficient suite of tests does not target the same functionality twice. However, to some degree it’s unavoidable that generic code is called multiple times. Imagine a simple query to fetch a user record. This will be invoked in multiple test scenarios. Throughout the entire test run it may be called fifty times, whereas its functionality needs to be validated only once. This is wasteful. Imagine a test that validates the unhappy paths in the snippet below. We want to catch the proper exceptions for an unknown member, unknown movie, user too young, and maximum number of rentals exceeded. Every subsequent scenario repeats more queries until it throws its expected exception.
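The article's snippet isn't reproduced in this excerpt, but the four unhappy paths it names can be sketched as follows, in Python rather than Java for consistency with this digest's other examples (every name and rule here is hypothetical). Notice that each later scenario must re-execute every guard before the one it targets, which is the repeated-query waste the author describes:

```python
class UnknownMemberError(Exception): pass
class UnknownMovieError(Exception): pass
class UserTooYoungError(Exception): pass
class RentalLimitError(Exception): pass

MEMBERS = {"alice": {"age": 34, "rentals": 5}, "bob": {"age": 12, "rentals": 0}}
MOVIES = {"Alien": {"min_age": 18}}
MAX_RENTALS = 5

def rent_movie(member_id, title):
    # In a real service each guard below would be a database query,
    # re-run by every test scenario that gets past it.
    if member_id not in MEMBERS:
        raise UnknownMemberError(member_id)
    if title not in MOVIES:
        raise UnknownMovieError(title)
    member, movie = MEMBERS[member_id], MOVIES[title]
    if member["age"] < movie["min_age"]:
        raise UserTooYoungError(member_id)
    if member["rentals"] >= MAX_RENTALS:
        raise RentalLimitError(member_id)
    return "ok"

# One focused check per unhappy path:
scenarios = [
    (("carol", "Alien"), UnknownMemberError),  # unknown member
    (("alice", "Gigli"), UnknownMovieError),   # unknown movie
    (("bob", "Alien"), UserTooYoungError),     # user too young
    (("alice", "Alien"), RentalLimitError),    # rental limit exceeded
]
for args, expected in scenarios:
    try:
        rent_movie(*args)
    except expected:
        print(f"{expected.__name__} raised, as expected")
```

The fourth scenario re-runs the member lookup, the movie lookup, and the age check before reaching the guard it actually validates, which is why the article suggests validating shared lookups once rather than in every scenario.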


How to right-size edge storage

Edge data centers are generally small-scale facilities that have the same components as traditional data centers but are squeezed into a much smaller footprint. In terms of capacity, determining edge storage requirements is similar to estimating the storage needs of a traditional data center; however, workloads can be difficult to predict, says Jason Shepherd, a vice president at distributed edge-computing startup Zededa. Edge-computing adopters also need to be aware of the cost of upgrading or expanding storage resources, which can be substantial given size and speed constraints. "This will change a bit over time as grid-based edge storage solutions evolve, because there will be more leeway to elastically scale storage in the field by pooling resources across discrete devices," Shepherd predicts. A more recent option for expanding edge-storage capacity independently of edge-compute capacity is computational storage drives that feature transparent compression. They provide compute services within the storage system while not requiring any modifications to the existing storage I/O software stack or I/O interface protocols, such as NVMe or SATA.


Smartphone counterespionage for travelers

If you’re deemed a target worthy of espionage, the IMSI catcher may even be used to install malware on your device. Such malware can take complete control of your phone, granting spies access to the contents on it, the communications from it and even its cameras and microphones. IMSI catchers have been detected at airports throughout the world, including in the United States. But really, they can be located anywhere, including at chokepoints like train stations and shopping centers as well as in the vicinity of hotels typically frequented by foreign travelers. If you’re lucky enough to avoid an IMSI catcher, you can still be monitored by local intelligence through the cell network alone. This is especially true in countries where the cellular infrastructure is state-owned. At the very least, spies will have access to your real-time location and the metadata of your calls. As with IMSI catchers, the cell network can also be used to deliver malware to your device, typically through a malicious carrier update that happens behind the scenes. The end result is that if you’re traveling to a foreign country, especially one that’s hostile to your home country or known to engage in economic espionage, you have to assume that your smartphone will be compromised at some point.


DevOps: 3 skills needed to support its future in the enterprise

While the future looks promising for DevOps experts, much will depend on how DevOps engineers are leveraged to transform how work gets done. For instance, DevOps engineers must continually strive to break down silos while moving away from traditional waterfall development and deployment practices that inhibit the delivery of scalable, high-quality, and reliable software. In a pandemic and post-pandemic world, organizations are modifying their operating plans and must deal with a distributed workforce. IT teams must also consider automation and untangling long-standing complexities such as siloed development and operations teams. Everything-as-code, hybrid cloud operating models, and automated workflows will be top priorities for every DevOps team. Digital services must excel across all organizational functions in order to delight customers. Meanwhile, organizations will continue to focus on how to increase revenue while reducing costs. Experience, processes, effectiveness, utilization, quality, and speed are the levers for improvement.


CISA Leader Backs 24-Hour Timeline for Incident Reporting

Wales' support for a 24-hour timeline aligns with the Senate Select Intelligence Committee's Cyber Incident Notification Act of 2021 - sponsored by Sens. Mark Warner, D-Va., Marco Rubio, R-Fla., and Susan Collins, R-Maine. The bill would require federal agencies, federal contractors and organizations that are considered critical to U.S. national security to report security incidents to CISA within 24 hours of discovery. Per the bill, companies that do not report an incident within 24 hours could face a maximum financial penalty equal to 0.5% of the previous year's gross revenue. The measure, however, allows for exceptions to the penalty. Another provision would allow organizations to anonymize personal data when they report a breach - to encourage victims to report incidents without revealing sensitive data. Some cybersecurity experts have said that it's unrealistic to expect organizations to report incidents within 24 hours of discovery because they need more time to properly assess an attack and determine if it meets the criteria for notification.
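To put the proposed cap in perspective, a quick sketch of the maximum penalty formula. The revenue figure below is an invented example, not taken from the bill or the article:

```python
# Maximum penalty under the proposed bill: 0.5% of the previous year's
# gross revenue. The $200M revenue figure is purely illustrative.

MAX_PENALTY_RATE = 0.005  # 0.5% cap proposed in the Cyber Incident Notification Act

def max_penalty(prior_year_gross_revenue):
    """Upper bound on the financial penalty for failing to report within 24 hours."""
    return prior_year_gross_revenue * MAX_PENALTY_RATE

penalty = max_penalty(200_000_000)  # a $200M-revenue company risks up to $1,000,000
print(penalty)
```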


The best approach to AI assistants and process automation for your business

For firms to harness the full potential of AI assistants and process automation, an effective approach is to consider how closely the two are intertwined. We’ve seen from experience that one of the most effective and logical methods of implementing AI and automation is to introduce digital assistants into existing customer services, where they can be used to capture and create a log of conversations. Presently, many companies’ customer services are constrained by the availability of their employees to staff phone lines or speak to customers in person, which can be a challenge outside of normal working hours. Digital assistants help to close the customer service gap by offering a 24/7 solution with which consumers can share their questions and issues whenever they need to, safe in the knowledge that the enquiry will be logged and prioritised accordingly. This is not to suggest that digital assistants should be viewed as a replacement for human engagement with customers – a survey conducted by Dutch tech firm Usabilla found that 55% of people still like to speak with a human customer service agent on the phone.


Takeoff: What Software Development Can Learn from Aviation

As with pilots practicing how to react to an engine outage, we regularly practice how to react to a database outage. Once a month, two of our engineers are randomly selected to run a database outage drill. We present them with the scenario that one of the databases on our staging system has crashed and needs to be restored from a backup. In this scenario they are the only people available and need to get the database up and running as soon as possible. We learned pretty quickly that these drills are enormously helpful. They give our people the confidence that if something like this actually happens, they won’t have to guess (or hunt through documentation for) what the next move could be, but can rely on their experience. The drills have also greatly improved our documentation and tooling, which, apart from being helpful in an emergency, have given us a better overview of our system landscape. We can already see that by the second or third drill, our engineers are a lot more relaxed. They know what to do and what to expect.
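The rotation described above can be automated trivially. A minimal sketch, assuming names and the fixed seed are illustrative (the article describes the practice, not any specific tooling):

```python
# Pick two engineers at random to run this month's staging-database
# restore drill. Team names are hypothetical examples.
import random

def pick_drill_crew(engineers, k=2, seed=None):
    """Randomly select k distinct engineers for the outage drill."""
    rng = random.Random(seed)   # seed only to make the example reproducible
    return rng.sample(engineers, k)

team = ["ana", "ben", "chris", "dana", "eli"]
crew = pick_drill_crew(team, seed=42)
print(crew)  # two distinct names drawn from the team
```

In practice such a script would run on a monthly schedule and weight the selection so everyone rotates through the drill over time.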



Quote for the day:

"When leaders are worthy of respect, the people are willing to work for them. When their virtue is worthy of admiration, their authority can be established." -- Huainanzi