Daily Tech Digest - December 08, 2021

Entrepreneurship for Engineers: Making Open Source Pay

Things shift as the application is deployed and scaled. At that point, the fact that something is open source quickly becomes irrelevant. Instead, engineers — and business leaders — care about things like reliability and security. And they are willing to pay for them. If an open source project is geared primarily towards the “build” phase and is either less visible or less valuable at the deploy and scale phase, it will be hard to monetize, no matter how popular it is. Likewise, it’s always easier to monetize a project that provides a mission-critical capability, something that would directly impact users’ revenue if it didn’t work. A project that facilitates payments is going to be very easy to monetize, but a project that makes the fonts on a webpage particularly beautiful faces an uphill battle. As an example, Fanelli pointed to Temporal, an open source microservice orchestration platform started by the creator of Cadence, a project Uber developed to ensure that jobs don’t fail, used for things like making sure payments are processed.


The checklist before offering and accepting a job especially for IT Industry

- Avoid a fail-fast strategy with employees: Sometimes organizations hire buffer candidates and, if they don’t meet expectations, ask them to leave. Instead, candidates must be assessed thoroughly and given enough time to perform.
- Short notice periods: A notice period of 4-6 weeks is enough time for knowledge transition. During longer notice periods, many employees are non-productive, as they spend the time completing HR formalities.
- The right references: Candidates who work with a company for a longer duration build good references. There is no point in having references if a candidate has spent less than one year at a company.
- Faster onboarding: Most of the time, the onboarding process is long and involves many steps, starting with HR onboarding, followed by the practice/BU and the project team. Showing a collaborative approach while onboarding candidates is good, but it is also important to have quick discussions with the relevant teams.


Personal Data Protection Bill: 4 Reasons Why Governments Bat for Data Localisation

First, it makes the personal data of resident data principals vulnerable to foreign surveillance, because governments in whose jurisdictions such servers are located will arguably have better access to the data. Second, storage and transfer of the personal data of resident data principals to jurisdictions with lax data protection laws also makes their data vulnerable. Third, it reduces the domestic government’s access to this data, thereby interfering with the discharge of its regulatory and law enforcement functions, including counter-terrorism and the prevention of cyber attacks and cyber offences. This is because requests for such information are either denied, citing the law of the foreign country, or their fulfilment is often delayed by the inefficient and time-consuming MLAT (Mutual Legal Assistance Treaty) processes. Fourth, it leads to missed opportunities for the domestic industry that would otherwise be engaged in providing storage services, in terms of foreign direct investment, creation of digital infrastructure and development of skilled personnel.


The DARL Language and its Online Fuzzy Logic Expert System Engine

DARL is an attempt to drag expert systems into the 21st century. DARL was initially created as a solution to a problem that still exists today in machine learning: how do you audit a trained neural network? That is, if you use machine learning to create a model for a real-world application, how do you ensure it doesn't accidentally do something bad, like identify the wrong person as a potential terrorist, or deny a loan to a minority group? Neural networks and other similar techniques produce models that are "black boxes". The answer the designer of DARL found was to use fuzzy logic rules as the model representation mechanism. Algorithms exist to perform supervised, unsupervised and reinforcement learning on these rules. DARL grew out of that. Initially, the models were coded in XML, but later a fully fledged language was created so that all the usual tools like editors, interpreters, etc. could be used with the models. The rules are very easy to understand, so auditing them for unexpected effects is simple.
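
The excerpt doesn't show DARL's syntax, but the auditability argument is easy to see in a small sketch of fuzzy-rule inference. This is plain Python, not DARL, and the rule and membership functions are invented for illustration:

```python
# Minimal sketch of fuzzy-rule inference, illustrating why rule-based
# models are auditable. NOT DARL syntax; names and membership functions
# are invented for illustration.

def triangular(x, lo, peak, hi):
    """Triangular membership function: degree (0..1) to which x belongs to a set."""
    if x <= lo or x >= hi:
        return 0.0
    return (x - lo) / (peak - lo) if x < peak else (hi - x) / (hi - peak)

def loan_risk(income_k, debt_ratio):
    # Fuzzify the inputs: each gets a degree of membership in [0, 1].
    low_income = triangular(income_k, 0, 20, 50)
    high_debt = triangular(debt_ratio, 0.4, 0.8, 1.2)
    # Rule: IF income IS low AND debt IS high THEN risk IS high.
    # AND is modeled as min(); the rule's firing strength is fully traceable.
    return min(low_income, high_debt)

print(loan_risk(25, 0.7))  # 0.75 — a human can trace exactly why this fires
```

Unlike a neural network's weights, each rule reads as a sentence, so a reviewer can inspect exactly which condition drove a decision.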
 

CI/CD Is Still All About Open Source

Jenkins was a CI tool at heart and later morphed into a CI/CD tool. Many people think that this fork in the road may have hurt the continued evolution of continuous delivery in the long term. But that is an argument for another DevOps.com article (or maybe even a panel discussion at an upcoming DevOps live event). Regardless of where you stand on that issue, as an open source project, it is hard to argue with the success of Jenkins. Driving a lot of that success is the Jenkins plug-in architecture. There are literally thousands of plugins that allow Jenkins to work with just about anything. That is the engine that powered Jenkins, yes; but its secret superpower was and is open source. That said, Jenkins has grown a bit long in the tooth over the years. It’s not that it doesn’t do what it always did, it’s that what we do and how we do it has changed. Microservices, Kubernetes and even cloud have changed the very fabric of the tapestry in front of which Jenkins sits. The open source community that supports Jenkins should receive enormous credit here: It has tried mightily to keep up with the many changes.


The threats of modern application architecture are closer than they appear

Shift-left approaches begin with the developer writing the first line of code: results at that stage can be vague and general, but vulnerabilities can be caught as early as possible. Shift-right approaches, on the other hand, detect vulnerabilities closer to the full deployment of the software, sometimes only in production runtime. Shifting toward the right is usually the easier approach, as it provides results that are more accurate and actionable, enabling developers to run the code and then find the mistakes. But it isn’t always the desirable choice, as the detection often comes simply too late. That means the fixes are harder and costlier, and in worst-case scenarios, your organization could already have been exposed to any given vulnerability. Shift left, by contrast, enables developers to see security testing results as early as possible, saving both time and money for IT teams in the long run. The key to conquering this tension is fostering a painless testing methodology that can be envisioned as “one platform to rule them all.”

 

The defensive power of diversity in cybersecurity

As with many things in technology, new disruptive ways of thinking are required to address the problem. There is a need to establish platforms, funding, policies and processes that diversify the talent pool in cybersecurity, opening it up to as wide a range of backgrounds as possible. Intelligence and law enforcement agencies are out in front, keen to reclaim the edge from attackers. What started in 2014 with the FBI grappling with whether to hire hackers who smoke cannabis has turned into more formalized programs that welcome diversity with open arms. Organizations such as GCHQ, the U.K.’s signals intelligence agency, are leading the way by actively hiring neurodiverse individuals for their unique ability to spot patterns in data. As with anything in cyber, what starts in intelligence agencies has a knack for achieving mainstream adoption among those defending large corporations. Those in cybersecurity need to recognize that diversity is about more than just equality. It is about optimizing defensive capabilities by having access to the widest possible range of problem-solving abilities.


How CEOs can pass the cybersecurity leadership test

The first order of business for CEOs is connecting the organization’s mission to the security of data, assets, and people. CEOs can do this by articulating an unambiguous foundational principle that establishes security and privacy as operational goals and business imperatives. Aflac, the largest provider of supplemental insurance at the workplace in the United States, has positioned cybersecurity at the center of who they are and what they do as a company. “We are one of the few insurance companies that measures ourselves on how fast we pay,” Aflac CISO Tim Callahan says. “Our operational managers are held to a standard of paying our claims fast. Dan Amos, our chairman and CEO, has never lost sight of who our customers are, and how much trust they have in us, and how we’re there for them during their time of need. That extends to protecting their information. He understands what the lack of cyber protection can do to our brand, to our customers, to our reputation. If the CEO were not passionate about that, then there’s a bigger problem.”


Why Cloud Native Systems Demand a Zero Trust Approach

In the past, when organizations relied on their own private, often on-premises, data centers — and workers usually came to a physical office to do their jobs — security experts considered data and workloads to have a definable “perimeter” that needed to be defended. Bad actors, human or machine, were denied access to the network the way invaders were repelled from a castle: by building a (virtual) moat around it. Hence the use of authentication and authorization via individual logins and passwords. The architects who designed these systems assumed entities inside an organization could be trusted, and that users’ identities were not compromised. But that castle-and-moat approach is widely considered to be unreliable today. Not only is there no single “castle” to defend — but chances are, there’s already someone or something in your castle that shouldn’t be there. A Zero Trust approach makes the assumption that, as the horror movie tagline goes, the call is coming from inside the house. It assumes that someone or something that shouldn’t be there may already be on your network.


How financial services companies are gaining value from cloud adoption

For cloud adoption to be successful, buy-in is required from the workforce and leadership. This is key to aligning tech investment and deployment with clear business goals, but a deep understanding of the strategic implications of cloud migration among C-suite and board members can sometimes be absent. Business leaders often believe it is the full responsibility of the CTO, but the discussion must go both ways, and therefore there is a gap to be bridged between business and IT to ensure that both sides are on the same page. “It’s easy to forget that you need a case for change, and to overlook the alignment of any staff member in charge of a team,” said Mould. “The leadership team also need to consider how they present the organisation as an attractive place for talent to help them with the cloud migration. The alternative is to outsource a capability that won’t be invested in internally, but a big part of this adoption is thinking differently about the brain drain and looking at creating an internal capability.”



Quote for the day: 

"Leaders should influence others in such a way that it builds people up, encourages and edifies them so they can duplicate this attitude in others." -- Bob Goshen

Daily Tech Digest - December 07, 2021

Why 2022 will be the year of data sovereignty cloud

Governments around the world are facing pressure to enact more comprehensive data privacy legislation, in response to increasing consumer concerns about how personal data and digital activity are stored and used. This is particularly notable when it comes to the cloud, because a business can store its data in any number of different geographic regions regardless of where the company itself is based – and if it uses public cloud providers, it might not even know where its data is physically stored. This is where questions of cloud data sovereignty – the concept that data stored in the cloud is subject to the laws and regulations of the country with jurisdiction over the physical servers and premises being used – become far more relevant. The world of data protection had a big wake-up call when the EU’s General Data Protection Regulation (GDPR) and the California Consumer Privacy Act (CCPA) were passed. These two landmark pieces of legislation aimed to bring some degree of consistency to the collection and use of personally identifiable information – for one of the world’s biggest trading blocs and the US’ most populous state respectively.


5 cybersecurity myths that are compromising your data

There are still two long-held misconceptions around passwords. The first is that adding capital letters, numbers or special characters to your one-word password will make it uncrackable. This myth is perpetuated by a lot of business accounts which have these requirements. However, the real measure of password security is length. Software can crack short passwords, no matter how "complex", in a matter of days. But the longer a password is, the more time it takes to crack. The recommendation is to use a memorable phrase -- from a book or song, for example -- that doesn’t include special characters. But determining a strong, (almost certainly) uncrackable password is only the first step. If the service you’re using is hacked and criminals gain access to your password, you’re still vulnerable. That’s where two-factor authentication (2FA) and multi-factor authentication (MFA) come in. These methods require you to set up an extra verification step. When you log in, you’ll be prompted to enter a security code, which will be sent to your phone or accessed via a dedicated verification app.
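
The length-beats-complexity claim follows from simple entropy arithmetic: a randomly chosen password of length n over an alphabet of size s carries about n * log2(s) bits of entropy. The quick sketch below compares the two cases (caveat: a phrase of dictionary words has less entropy per character than random letters, but the direction of the comparison holds):

```python
import math

def entropy_bits(alphabet_size: int, length: int) -> float:
    # Entropy of a randomly chosen password: length * log2(alphabet size).
    return length * math.log2(alphabet_size)

# 8 characters drawn from ~94 printable ASCII symbols (short but "complex")
print(f"{entropy_bits(94, 8):.0f} bits")   # ~52 bits
# 20 lowercase letters (a long, memorable phrase, no special characters)
print(f"{entropy_bits(26, 20):.0f} bits")  # ~94 bits
```

Every additional bit doubles the search space, so the longer, simpler password is astronomically harder to brute-force than the short, complex one.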


How to protect air-gapped networks from malicious frameworks

Discovering and analyzing this type of framework poses unique challenges, as sometimes there are multiple components that must all be analyzed together to get a complete picture of how the attacks are really carried out. Using the knowledge made public by more than 10 different organizations over the years, and some ad hoc analysis to clarify or confirm technical details, researchers put the frameworks in perspective to see what history could teach cybersecurity professionals and, to a certain extent, even the wider public about improving air-gapped network security and our ability to detect and mitigate future attacks. They have revisited each framework known to date, comparing them side by side in an exhaustive study that reveals several major similarities, even between those produced 15 years apart. “Unfortunately, threat groups have managed to find sneaky ways to target these systems. As air-gapping becomes more widespread, and organizations are integrating more innovative ways to protect their systems, cyber-attackers are equally honing their skills to identify new vulnerabilities to exploit,” says Alexis Dorais-Joncas.


5 DevOps Concepts You Need to Know

Continuous Integration (CI) and Continuous Delivery (CD) are fundamental DevOps concepts. They enable developers to manage their work, merge their changes into a central repository (or version control system), and release continuously. If you go back to the core DevOps principles, it’s all about achieving the best collaboration, whether or not you’re working on the same functions or classes, triggers, layouts, and so on. Think of your worst ‘version control’ nightmares dissipating because of CI/CD. But watch out for the major misconception that this is achieved purely through ‘tooling’. After all, you can’t buy tools and simply expect them to fix your problems – if you buy a drill, the shelves don’t go up on their own! First, you must understand the process (how to level the boards, where to use wall anchors, and so on). In our developer world, it’s important to understand the tools and the processes that come along with them. Similarly, CI/CD tools won’t fix your problems if you don’t have the right processes in place (such as a branch management strategy or environment strategy).


Are You Guilty of These 8 Network-Security Bad Practices?

With many people still working from home, the lines between work life and personal life have become blurred. Sometimes, it’s just easier to use a personal email account or computer for communicating with colleagues. But this can dramatically increase the risk of a phishing attack aimed at credential harvesting or malware distribution, which can turn your home computer or business laptop into a vector for malware infecting many other users — including work colleagues. Once inside your company’s email server, an attacker is free to access critical data assets. ... Security-conscious companies wisely limit access to websites via the corporate network. But when working from home, all bets are off. So, your child might borrow your company laptop to visit a gaming or education site with weak security — or, worse yet, a malicious site that appears legitimate — potentially delivering malicious JavaScript which gains entry to your corporate network the next time you log in. The loosely collected cybercrime syndicate known as Magecart has elevated malicious JavaScript to an art, skimming credit-card information and login credentials from websites.


All About ‘Bank Python,’ a Finance-Specific Language Fork

Bank Python implementations also seem to use their own proprietary data structure for tables, offering faster access to medium-sized datasets (while storing them more efficiently in memory). “Some implementations are lumps of C++ (not atypical of financial software) and some are thin veneers over sqlite3,” Paterson said. His friend Salim Fadhley, a London-based developer, has even released an all-Python version of the table data structure called eztable. Paterson concludes that while most programming has a code-first approach, Bank Python would be characterized as data-first. While it’s ostensibly object-oriented, “you group the data into tables and then the code lives separately.” Needless to say, Bank Python inevitably ends up getting its own internal integrated development environment (IDE) to handle all of its unique configuration quirks, and it even has its own version-control system for code. Paterson acknowledged the uncharitable assessment that it’s all just a grand exercise in distrusting anything that originated outside the company.
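
Paterson’s “data-first” distinction is easier to see in miniature. The sketch below is not Bank Python’s proprietary table or eztable’s actual API, just an illustration of a typed table where the code lives separately from the data:

```python
# Illustrative "data-first" typed table. Not Bank Python's proprietary
# structure and not eztable's real API; names are invented.

class Table:
    def __init__(self, schema):
        # schema: list of (column_name, type) pairs
        self.schema = schema
        self.rows = []

    def append(self, row):
        # Enforce column types on insert, like a lightweight DB table.
        for value, (name, typ) in zip(row, self.schema):
            if not isinstance(value, typ):
                raise TypeError(f"column {name!r} expects {typ.__name__}")
        self.rows.append(tuple(row))

    def column(self, name):
        idx = [n for n, _ in self.schema].index(name)
        return [row[idx] for row in self.rows]

trades = Table([("ticker", str), ("qty", int), ("price", float)])
trades.append(("VOD.L", 100, 124.5))

# The code operating on the data lives apart from the table itself:
def notional(table):
    return sum(q * p for q, p in zip(table.column("qty"), table.column("price")))

print(notional(trades))  # 12450.0
```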


TSA Issues New Cybersecurity Requirements for Rail Sector

TSA also released guidance recommending that lower-risk surface transportation owners and operators voluntarily implement the same measures. "We have not witnessed a rail industry event on the level of Colonial Pipeline, but a ransomware disruption, let alone a targeted attack, is a plausible scenario," says John Dickson, vice president of the cloud security firm Coalfire, which provides services to DHS and other federal agencies. He says that without "a regulatory nudge," the rail industry, particularly the freight portion, is not likely to improve its cybersecurity hygiene on its own. Other experts say TSA could be overwhelmed by reports of what they call noise. "At a high level, the directives seem completely reasonable, but as always, the devil is in the details," says Jake Williams, a former member of the NSA's elite hacking team. "Taken at face value, railway operators would have to report every piece of commodity malware that is discovered in the environment, even if antivirus or EDR prevented that malware from ever executing."


Russian Actors Behind SolarWinds Attack Hit Global Business & Government Targets

In at least one case, the attacker compromised a local VPN account, then used it to conduct reconnaissance and gain access to internal resources in the victim CSP's environment. This allowed them to compromise internal domain accounts. In another campaign, attackers were able to access a victim's Microsoft 365 environment using a stolen session token; it was later discovered that some systems had been infected with the info-stealer Cryptbot before the token was generated. Other techniques include the compromise of a Microsoft Azure AD account within a CSP's tenant in one attack; in another, attackers used RDP to pivot between systems that had limited Internet access. The attackers compromised privileged accounts and used SMB, remote WMI, remote scheduled tasks registration, and PowerShell to execute commands in target networks. Attackers are also making use of a new bespoke downloader dubbed Ceeloader, which decrypts a shellcode payload to execute in memory on a target device.


Automation strategy: 6 key elements

Ad hoc automation tends to occur independently of other efforts. Even if it solves a problem at hand, there are unclear (if any) links to how that aligns with broader goals. While that might be fine to some extent, it can also breed silos, cultural resistance, and other potential issues. Strategic automation can be both incremental and well-connected to the big picture. “While there are many questions a CIO will have along the way when deciding their automation strategy, the single most important question they should ask themselves is: ‘How will automation help my organization achieve the business outcomes we need to get to where we want to be in 4-5 years?’” Becky Trevino, VP of operations at Snow Software, told us. Trevino notes that a “yes-no” matrix can help guide decision-making and prioritization, as in: “Does automating this help us achieve X?” If the answer is yes, then you do it. If the answer is no or maybe, then you should at minimum be asking deeper questions about why you’re doing it.


How consumers will see banks embrace AI in 2022

What does “genuinely personalised” banking look like? To answer that, we should compare these challenger banks with “business as usual” in the sector. Currently, most traditional banks still treat their online accounts as a digital version of a traditional balance statement. The odds are that your bank’s online account provides only a simple, itemised list of your ingoings and outgoings. If you want to calculate how much you spend, how you allocate that spending, set a realistic budget for next month, or estimate how much you might be able to save in an average month, you’ll often simply have to trawl through your statement yourself and do the calculations by hand. Want to easily see how much goes out on your subscription services or other automatic charges versus incidental spending, and perhaps manage some of those financial commitments? The data is all there, but it often has yet to be transformed into easy-to-understand interfaces that can help consumers or small business owners get their finances under control. This ends up being burdensome for people. And it’s also quite unnecessary.
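
As an illustration of how modest the required transformation is, here is a toy sketch that flags likely subscriptions from a raw transaction list (the merchants, amounts and matching heuristic are invented for illustration):

```python
from collections import defaultdict

# Toy transactions: (merchant, amount). A recurring charge shows up as
# repeated, identical debits from the same merchant.
transactions = [
    ("NETFLIX", -9.99), ("COFFEE SHOP", -3.40), ("NETFLIX", -9.99),
    ("GYM CO", -25.00), ("NETFLIX", -9.99), ("GYM CO", -25.00),
]

by_merchant = defaultdict(list)
for merchant, amount in transactions:
    by_merchant[merchant].append(amount)

# Flag merchants with 2+ identical debits as likely subscriptions.
subscriptions = {m: a[0] for m, a in by_merchant.items()
                 if len(a) >= 2 and len(set(a)) == 1}
print(subscriptions)  # {'NETFLIX': -9.99, 'GYM CO': -25.0}
```

A production feature would of course handle variable amounts and billing cycles, but the point stands: the raw statement already contains the signal.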



Quote for the day:

"Leadership happens at every level of the organization and no one can shirk from this responsibility." -- Jerry Junkins

Daily Tech Digest - December 06, 2021

Why Qualcomm believes its new always-on camera for phones isn’t a security risk

Judd Heap, VP of Product Management at Qualcomm’s Camera, Computer Vision and Video departments, told TechRadar: “The always-on aspect is frankly going to scare some people, so we wanted to do this responsibly. The low power aspect where the camera is always looking for a face happens without ever leaving the Sensing Hub. All of the AI and the image processing is done in that block, and that data is not even exportable to DRAM. We took great care to make sure that no-one can grab that data, so that someone can’t watch you through your phone.” This means the data from the always-on camera won’t be usable by other apps on your phone or sent to the cloud. It should stay in this one area of the phone’s chipset (what Heap refers to as the Sensing Hub) for detecting your face. Heap continues: “We added this specific hardware to the Sensing Hub as we believe it’s the next step in the always-on body of functions that need to be on the chip. We’re already listening, so we thought the camera would be the next logical step.”


The HaloDoc Chaos Engineering Journey

The platform is composed of several microservices hosted across hybrid infrastructure elements, mainly on a managed Kubernetes cloud, with an intricately designed communication framework. We also leverage AWS cloud services such as RDS, Lambda and S3, and consume a significant suite of open source tooling, especially from the Cloud Native Computing Foundation landscape, to support the core services. As the architect and manager of site reliability engineering (SRE) at HaloDoc, ensuring smooth functioning of these services is my core responsibility. In this post, I’d like to provide a quick snapshot of why and how we use chaos engineering as one of the means to maintain resilience. While operating a platform of such scale and churn (newer services are onboarded quite frequently), one is bound to encounter some jittery situations. We had a few incidents with newly added services going down that, despite being immediately mitigated, caused concern for our team. In a system with the kind of dependencies we had, it was necessary to test and measure service availability across a host of failure scenarios.
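
The core loop of testing availability across failure scenarios can be sketched generically: inject faults into a dependency and verify that the steady state (here, the service always answering, possibly from a fallback) holds. This is an illustrative sketch, not HaloDoc’s tooling or any specific chaos framework:

```python
import random

# Generic chaos-experiment sketch: inject failures into a dependency and
# check the service's steady-state hypothesis under that failure rate.
def flaky_dependency(failure_rate: float) -> str:
    if random.random() < failure_rate:
        raise ConnectionError("injected failure")
    return "fresh data"

def service_call(failure_rate: float) -> str:
    try:
        return flaky_dependency(failure_rate)
    except ConnectionError:
        return "cached data"  # the fallback path under test

results = [service_call(failure_rate=0.3) for _ in range(1000)]
availability = sum(r is not None for r in results) / len(results)
assert availability == 1.0, "steady-state hypothesis violated"
print(f"served from cache: {results.count('cached data')} of {len(results)}")
```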


Zero trust, cloud security pushing CISA to rethink its approach to cyber services

“When agencies hear the IG say something about how things are going with FISMA, they really pay attention. If we’re in a position to help influence that in a positive way, it’s absolutely critical that we do so,” he said. “We’ve got to pare down what we’re spending on IT and really focus on those things that matter. We have to adjust to a risk management approach in terms of how we apply architecture and capabilities across the enterprise to support the varying degrees of risk that we can absorb or manage within a given agency network. That’s like a huge part of what we need to continue to advocate for. But, to me, that is a significant element of the culture shift that needs to happen.” One way CISA is going to drive some of the culture and technology changes to help agencies achieve a zero trust environment is through the continuous diagnostics and mitigation program. CISA released a request for information for endpoint detection and response capabilities in October that vendors under the CDM program will implement for agencies.


DeFi’s Decentralization Is an Illusion: BIS Quarterly Review

“The decentralised nature of DeFi raises the question of how to implement any policy provisions,” the report said. “We argue that full decentralisation in DeFi is an illusion.” One element that could break this illusion is DeFi’s governance tokens, which are cryptocurrencies that represent voting power in decentralized systems, according to the report. Governance-token holders can influence a DeFi project by voting on proposals or changes to the governance system. These governing bodies are called decentralised autonomous organizations (DAOs), and each one can oversee multiple DeFi projects. “This element of centralisation can serve as the basis for recognising DeFi platforms as legal entities similar to corporations,” the report said. It gave an example of how DAOs can register as limited liability companies in the state of Wyoming. “These groups, and the governance protocols on which their interactions are based, are the natural entry points for policymakers,” the report said. During Monday’s briefing, Shin explained that there are three areas regulators could address through these centralized organizational bodies.


This New Ultra-Compact Camera Is The Size of a Grain of Salt And Takes Stunning Photos

Using a technology known as a metasurface, which is covered with 1.6 million cylindrical posts, the camera is able to capture full-color photos that are as good as images snapped by conventional lenses some half a million times bigger than this particular camera. And the super-small contraption has the potential to be helpful in a whole range of scenarios, from helping miniature soft robots explore the world, to giving experts a better idea of what's going on deep inside the human body. "It's been a challenge to design and configure these little microstructures to do what you want," says computer scientist Ethan Tseng from Princeton University in New Jersey. ... One of the camera's special tricks is the way it combines hardware with computational processing to improve the captured image: Signal processing algorithms use machine learning techniques to reduce blur and other distortions that otherwise occur with cameras this size. The camera effectively uses software to improve its vision.


Top Internet of Things (IoT) Trends for 2022: The Future of IoT

Hyperconnectivity and ultra-low latency are necessary to power successful IoT solutions. 5G is the connectivity that will make more widespread IoT access possible. Currently, cellular companies and other enterprises are working to make 5G technology available in their areas to support further IoT development. Bjorn Andersson, senior director of global IoT marketing at Hitachi Vantara, an IT service management and top-performing IoT company, explained why the next wave of wider 5G access will make all the difference for new IoT use cases and efficiencies. “With commercial 5G networks already live worldwide, the next wave of 5G expansion will allow organizations to digitalize with more mobility, flexibility, reliability, and security,” Andersson said. “Manufacturing plants today must often hardwire all their machines, as Wi-Fi lacks the necessary reliability, bandwidth, or security. 5G delivers the best of two worlds: the flexibility of wireless with the reliability, performance, and security of wires. 5G is creating a tipping point.”


Zero Trust: Time to Get Rid of Your VPN

OAuth and OpenID Connect (OIDC) are standards that enable a token-based architecture, a pattern that fits exceptionally well with a ZTA. In fact, you could argue that zero trust architecture is a token-based architecture. So, how does a token-based architecture work? First, it determines who the user is or what system or service is requesting access. Then, it issues an access token. The token itself will contain different claims, depending on the resource that is being requested as well as contextual information. The claims given in the token can, for example, be determined by a policy engine such as Open Policy Agent (OPA). A policy describes the allowed access and which claims are needed to access certain resources. In the context of the access request, the token service can issue a token with appropriate claims based on that defined policy. Resources that are being accessed need to verify the identity. In modern architectures, this is typically some type of API. When the request to the API is received, the API validates the access token sent with the request. 
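
As a concrete sketch of that last step, here is how an API might validate an incoming access token using the widely used PyJWT library. The issuer, audience, JWKS URL and scope claim are illustrative assumptions, not a prescription:

```python
import jwt  # PyJWT (pip install pyjwt[crypto])
from jwt import PyJWKClient

# Illustrative values; a real API would take these from configuration.
ISSUER = "https://login.example.com"
AUDIENCE = "orders-api"
jwks = PyJWKClient(f"{ISSUER}/.well-known/jwks.json")

def authorize(token: str) -> dict:
    # Fetch the signing key matching the token's key ID, then verify
    # signature, issuer, audience and expiry in one call.
    key = jwks.get_signing_key_from_jwt(token).key
    claims = jwt.decode(token, key, algorithms=["RS256"],
                        issuer=ISSUER, audience=AUDIENCE)
    # Zero trust: every request is verified, then claim-based policy applies.
    if "orders:read" not in claims.get("scope", "").split():
        raise PermissionError("insufficient scope")
    return claims
```

Nothing is trusted by virtue of network location: each call carries its own proof, and the API re-verifies it every time.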


Breaking Up a Monolithic Database with Kong

The RESTful API architectural style provides an easy way for client applications to gain access to the resources (data) they need to meet business needs. In fact, it did not take long for JavaScript-based frameworks like Angular, React, and Vue to rely on RESTful APIs and lead the market for web-based applications. This pattern of RESTful service APIs and frontend JavaScript frameworks sparked a desire in many organizations to fund projects migrating away from monolithic or outdated applications. The RESTful API pattern also provided a much-needed boost to the technology economy, which was still recovering from the impact of the Great Recession. ... My recommended approach is to isolate a given microservice with a dedicated database. This allows the count and size of the related components to match user demand while avoiding additional costs for elements that do not have the same levels of demand. Database administrators are quick to defend the single-database design by noting the benefits that constraints and relationships can provide when all of the elements of the application reside in a single database.


Securing identities for the digital supply chain

As the world becomes more connected, governing and securing digital certificates is a business essential. As certificates’ lifespans continue to shrink, enterprises need to deploy ever more of them into their digital infrastructure. With greater numbers of certificates entering an organisation’s cyberspace, there is more room for dangerous expirations to go unnoticed. From business-ending outages to crippling cyber attacks, the potential downside of bad management of this vital utility is huge. Unfortunately, digital certificates are still woefully mismanaged by businesses and governments worldwide. The volume of certificates being used to secure digital identities is growing exponentially, and businesses are faced with new management challenges that can’t be solved with legacy certificate automation models or outdated on-premises solutions. ... Today’s digital-first enterprise requires a modern approach to managing the exponential growth of certificates, regardless of the issuing certificate authority (CA), and one built to work within today’s complex zero trust IT infrastructure.
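
The expiry-monitoring half of the problem is simple to sketch. Below is a minimal check using the pyca/cryptography library; the file path and 30-day threshold are illustrative, and a real platform would inventory certificates fleet-wide rather than read a single file:

```python
from datetime import datetime
from cryptography import x509

# Path is illustrative; a management platform would discover certificates
# across hosts, load balancers and services automatically.
with open("server-cert.pem", "rb") as f:
    cert = x509.load_pem_x509_certificate(f.read())

# not_valid_after is a naive UTC datetime in pyca/cryptography.
remaining = cert.not_valid_after - datetime.utcnow()
if remaining.days < 30:
    print(f"renew soon: certificate expires in {remaining.days} days")
```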


Lightweight External Business Rules

Traditional rule engines that enable domain experts to author rule sets and behaviors outside the codebase are highly useful in a complex and large business landscape. But for smaller and less complex systems, they often turn out to be overkill and remain underutilised, given the recurring cost of the on-premises or cloud infrastructure they run on, license fees, and so on. For a small team, adding any component that requires an additional skill set is a waste of its bandwidth, and some of the commercial rule engines have steep learning curves. In this article, we attempt to illustrate how we succeeded in maintaining rules outside the source code of a medium-scale system running on a Java tech stack like Spring Boot, making it easier for other users to customize these rules. This approach is suitable for a team that cannot afford a dedicated rule engine, its infrastructure, maintenance, recurring costs and so on, and whose domain experts have a software foundation, or where people within the team wear multiple hats.
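
The article’s system runs on a Java stack, but the shape of the approach is easy to sketch in a few lines of Python: rules live in an external data file that domain experts can edit, and the application evaluates them generically. The rule fields and operators here are invented for illustration:

```python
import json
import operator

# Rules live outside the codebase (a file, S3 object, config service...),
# editable without a redeployment. Field names are illustrative.
RULES_JSON = """
[
  {"field": "order_total", "op": ">=", "value": 100,  "action": "free_shipping"},
  {"field": "country",     "op": "==", "value": "DE", "action": "add_vat"}
]
"""

OPS = {">=": operator.ge, "==": operator.eq, "<": operator.lt}

def evaluate(rules, facts):
    # Generic evaluator: the code never hardcodes a business rule.
    return [r["action"] for r in rules
            if OPS[r["op"]](facts[r["field"]], r["value"])]

rules = json.loads(RULES_JSON)
print(evaluate(rules, {"order_total": 120, "country": "DE"}))
# ['free_shipping', 'add_vat']
```

Changing a threshold or adding a rule is then a data edit, not a code change, which is exactly the benefit a heavyweight rule engine provides, at a fraction of the cost.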



Quote for the day:

"Coaching is unlocking a person's potential to maximize their own performance. It is helping them to learn rather than teaching them." -- John Whitmore

Daily Tech Digest - December 05, 2021

How Data Scientists Can Improve Their Coding Skills

Learning is incremental by nature and builds upon what we already know. Learning should not be drastically distant from our existing knowledge graph, which makes self-reflection increasingly important. ... After reflecting on what we have learned, the next step is to teach others with no prior exposure to the content. If we truly understand it, we can break the concept into multiple digestible modules and make it easier to understand. Teaching takes place in different forms. It could be a tutorial, a technical blog, a LinkedIn post, a YouTube video, etc. I’ve been writing long-form technical blogs on Medium and shorter-form Data Science primers on LinkedIn for a while. In addition, I’m experimenting with YouTube videos, which provide a great supplementary channel for learning. Without these two ingredients, my Data Science journey would have been bumpier and more challenging. Honestly, all of my aha moments come after extensive reflection and teaching, which is my biggest motivation to be active on multiple platforms.


5 Dashboard Design Best Practices

From a design perspective, anything that doesn’t convey useful information should be removed. Things that don’t add value, like chart grids or decorations, are prime examples. This can also include things that look cool but don’t really add anything to the dashboard, such as a gauge chart where a simple number gives the user the same information in less space. If you are conflicted, err on the side of caution and remove anything that doesn’t add functional value. Space is a prized dashboard commodity, so you don’t want to waste any of it on things that are just there to look pretty. Using proportion and relative sizing to display differences in data is another way to make data easier for viewers to quickly understand. Things like bubble charts, area charts or Sankey diagrams can visually show differences that are understood at a glance. The purpose of a dashboard is to convey information efficiently so users can make better decisions. This means you shouldn’t try to mislead people or steer them toward a certain decision.


From The Great Resignation To The Great Migration

Much has been written about The Great Resignation, the trend for over 3.4% of the US workforce to leave their jobs every month. Yes, the trend is real: companies like Amazon are losing more than a third of their workers each year, forcing employers to ramp up hiring like we have never seen before. But while we often blame the massive quit rate on the Pandemic, let me suggest that something else is going on. This is a massive and possibly irreversible trend: that of giving workers a new sense of mobility they’ve never had before. Consider a few simple facts. Today, more than 45% of employees work remotely (25% full time), which means changing jobs is as simple as getting a new email address. Only 11% of companies offer formal career programs for employees, so in many cases, the only opportunity to grow is by leaving. And wages, benefits, and working conditions are all a “work in progress.” Today US companies spend 32% of their entire payroll on benefits, and most are totally redesigning them to improve healthcare, flexibility, and education.


How Much Has Quantum Computing Actually Advanced?

Everyone's working hard to build a quantum computer. And it's great that there are all these systems people are working on. There's real progress. But if you go back to one of the points of the quantum supremacy experiment—and something I've been talking about for a few years now—one of the key requirements is low gate errors. I think gate errors are way more important than the number of qubits at this time. It's nice to show that you can make a lot of qubits, but if you don't make them well enough, it's less clear what the advance is. In the long run, if you want to do a complex quantum computation, say with error correction, you need gate errors way below 1%. So it's great that people are building larger systems, but it would be even more important to see data on how well the qubits are working. In this regard, I am impressed with the group in China that reproduced the quantum supremacy results, where they show that they can operate their system well with low errors.
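
A rough back-of-the-envelope calculation shows why gate error dominates: if each gate independently succeeds with probability (1 - error), circuit fidelity decays exponentially with depth. The sketch below ignores error correction and correlated noise, so it is only directional:

```python
# Naive circuit success probability: every gate must succeed independently.
# Ignores error correction and noise structure; directional illustration only.
for error in (0.01, 0.001):               # 1% vs 0.1% per-gate error
    for gates in (100, 1000, 10000):
        p = (1 - error) ** gates
        print(f"error={error:.3f}  gates={gates:>5}  P(success) ~ {p:.3g}")
```

At a 1% gate error, a 1,000-gate circuit succeeds with probability around 4e-5; at 0.1% it is about 0.37. That is why per-gate fidelity, not raw qubit count, sets the ceiling on useful circuit depth.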


How Banks Can Bridge The Data Sharing Privacy Gap

Consent management rules regarding online advertising data collection may be tightening in numerous European Union markets. The Belgian Data Authority recently alleged that online advertising trade organization IAB Europe’s Transparency and Consent Framework (TCF) breaches the EU’s General Data Protection Regulation (GDPR). Statements from the Irish Council for Civil Liberties (ICCL), one of the legal coordinators on the case, also alleged IAB Europe was aware its consent popups violated GDPR. The case highlights why EU entities must pay careful attention to how consent management standards are changing to ensure they remain compliant. Experts also predict that GDPR regulatory oversight surrounding consent management will increase in 2022, meaning organizations must look carefully at how they structure consent boxes and other forms provided to customers. It is also becoming increasingly important for consumers to understand what data they share and which entities may access their information.


ECB Paper Marks Success Factors for CBDCs, Digital Euro

The first one is ‘merchant acceptance’, which has to be wide, meaning users should be able to pay digitally anywhere. Unlike paper cash, a digital currency is likely to come with fees for each transaction and require dedicated devices to process the payments. There are other differences as well, despite both forms of money having legal tender status. The ECB elaborates: ... The second success factor has been defined as ‘efficient distribution.’ The ECB officials quote a Eurosystem report, according to which a digital euro should be distributed by supervised intermediaries such as banks and regulated payment providers. To encourage the distribution of the central bank digital currency, incentives may be paid to supervised intermediaries. The document divides intermediary services into two categories: onboarding and funding services — which would include the operations required to open, manage, and close a CBDC account — and payment services.


Let there be light: Ensuring visibility across the entire API lifecycle

When approaching API visibility, the first thing we have to recognize is that today's enterprises actively avoid managing all their APIs through one system. According to IBM's Tony Curcio, Director of Integration Engineering, many of his enterprise customers already work with hybrid architectures that leverage classic on-premise infrastructure while adopting SaaS and IaaS across various cloud vendors. These architectures aim to increase resilience and flexibility, but at the cost of complicating centralization efforts. In these organizations, it is imperative to have a centralized API location with deployment into each of these locations, to ensure greater visibility and better management of API-related business activities. The challenge for security teams is that there isn't one central place where all APIs are managed by the development team - and as time passes, that complexity is likely only to get worse.


DevOps for Quantum Computing

Like any other Azure environment, quantum workspaces and the classical environments can be automatically provisioned by deploying Azure Resource Manager templates. These JavaScript Object Notation (JSON) files contain definitions for the two target environments. The quantum environment contains all resources required for executing quantum jobs and storing input and output data: an Azure Quantum workspace connecting hardware providers, and its associated Azure Storage account for storing job results after they complete. This environment should be kept in its own resource group, which allows the lifecycle of these resources to be separated from that of the classical resources. The classical environment contains all other Azure resources you need for executing the classical software components. The types of resources here depend heavily on the selected compute model and the integration model, and you would often recreate this environment with each deployment. You can store and version both templates in a code repository (for example, Azure Repos or GitHub repositories).
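
To make the shape of these templates concrete, here is a minimal skeleton of the quantum environment’s template, expressed as a Python dict (JSON-equivalent) for readability. The resource names and API versions are illustrative assumptions, and required workspace properties are omitted; consult the Azure Quantum documentation for the exact schema:

```python
# Skeleton of an ARM template for the quantum environment, as a Python
# dict. apiVersion values and names are illustrative; required workspace
# properties (providers, linked storage) are omitted for brevity.
quantum_template = {
    "$schema": "https://schema.management.azure.com/schemas/2019-04-01/deploymentTemplate.json#",
    "contentVersion": "1.0.0.0",
    "resources": [
        {
            "type": "Microsoft.Storage/storageAccounts",
            "apiVersion": "2021-08-01",
            "name": "quantumjobstore",  # holds job input/output data
            "location": "[resourceGroup().location]",
            "sku": {"name": "Standard_LRS"},
            "kind": "StorageV2",
        },
        {
            "type": "Microsoft.Quantum/workspaces",
            "apiVersion": "2022-01-10-preview",  # illustrative
            "name": "my-quantum-workspace",
            "location": "[resourceGroup().location]",
            "dependsOn": [
                "[resourceId('Microsoft.Storage/storageAccounts', 'quantumjobstore')]"
            ],
        },
    ],
}
```

Deploying this template into its own resource group is what keeps the quantum resources’ lifecycle independent of the classical environment, as described above.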


Is the UK government’s new IoT cybersecurity bill fit for purpose?

The bill outlines three key areas of minimum security standards. The first is a ban on universal default passwords — such as “password” or “admin” — which are often preset in a device’s factory settings and are easily guessable. The second will require manufacturers to provide a public point of contact to make it simpler for anyone to report a security vulnerability. And the third requires IoT manufacturers to keep customers informed about the minimum amount of time a product will receive vital security updates. This new cybersecurity regime will be overseen by an as-yet-undesignated regulator, which will have the power to levy GDPR-style penalties; companies that fail to comply with PSTI could be fined £10 million or 4% of their annual revenue, as well as up to £20,000 a day in the case of an ongoing contravention. On the face of it, the PSTI bill sounds like a step in the right direction, and the ban on default passwords especially has been widely commended by the cybersecurity industry as a “common sense” measure.


Werner Vogel’s 6 Rules for Good API Design

Once an API is created, it should never be deleted or changed. “Once you put an API out there, businesses will build on top of it,” Vogels said, adding that changing the API will basically break their businesses. Backward compatibility is a must. This is not to say you can’t modify or improve the API. But whatever changes you make shouldn’t alter the API in a way that affects calls coming in from previous versions. As an example, AWS has enhanced its Simple Storage Service (S3) in multiple ways since its launch in 2006, but the first-generation APIs are still supported. The way to design APIs is not to start with what the engineers think would make for a good API. Instead, figure out what your users need from the API first, and then “work backwards from their use cases. And then come up with a minimal and simplest form of API that you can actually offer,” Vogels said. As an example, Vogels described an advertisement system that can be used for multiple campaigns.
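
In practice, “never break callers” usually means evolving a request schema only by adding optional fields whose defaults preserve the old behaviour. A minimal sketch, with invented field names:

```python
# Evolving an API without breaking existing callers: new fields are
# optional, with defaults that reproduce version-1 behaviour.
def create_campaign(request: dict) -> dict:
    name = request["name"]                     # required since v1
    budget = request.get("budget", 0)          # added in v2, optional
    regions = request.get("regions", ["all"])  # added in v3, optional
    return {"name": name, "budget": budget, "regions": regions}

# A v1-era caller, unaware of the later fields, still gets a valid result:
print(create_campaign({"name": "holiday-sale"}))
# {'name': 'holiday-sale', 'budget': 0, 'regions': ['all']}
```

Removing or renaming a field, by contrast, would be exactly the kind of change Vogels warns against.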



Quote for the day:

"Leaders are visionaries with a poorly developed sense of fear and no concept of the odds against them." -- Robert Jarvik

Daily Tech Digest - December 04, 2021

Universal Stablecoins, the End of Cash and CBDCs: 5 Predictions for the Future of Money

Many of the features that decentralized finance, or DeFi, brings to the table will be copied by regular finance in the future. For instance, there’s no reason that regular finance can’t copy the automaticity and programmability that DeFi offers, without bothering with the blockchain part. Even as regular finance copies the useful bits from DeFi, DeFi will emulate regular finance by pulling itself into the same regulatory framework. That is, DeFi tools will become compliant with anti-money laundering/know your customer (AML/KYC) rules, Securities and Exchange Commission-registered or licensed with the Office of the Comptroller of the Currency (OCC). And not necessarily because they are forced to do so. (It’s hard to force a truly decentralized protocol to do anything.) Tools will comply voluntarily. Most of the world’s capital is licit capital. Licit capital wants to be on regulated venues, not illegal ones. To capture this capital, DeFi has no choice but to get compliant. The upshot is that over time DeFi and traditional finance (TradFi) will blur together. 


10 Rules for Better Cloud Security

Security in the cloud follows a pattern known as the shared responsibility model, which states that the provider is only responsible for security ‘of’ the cloud, while customers are responsible for security ‘in’ the cloud. This essentially means that to operate in the cloud, you still need to take on your share of the work of secure configuration and management. The scope of your commitment can vary widely because it depends on the services you are using: if you’ve subscribed to an Infrastructure as a Service (IaaS) product, you are responsible for OS patches and updates. If you only require object storage, your responsibility scope will be limited to data loss prevention. Despite this great diversity, there are some guidelines that apply no matter what your situation is. And the reason for this is simply that cloud vulnerabilities essentially reduce to one thing: misconfigurations. Cloud providers have put powerful security tools at your disposal, yet we know that they will fail at some point. People make mistakes, and misconfigurations are easy.


Unit testing vs integration testing

Tests need to run to be effective. One of the great advantages of automated tests is that they can run unattended. Automating tests in CI/CD pipelines is considered a best practice, if not mandatory according to most DevOps principles. There are multiple stages when the system can and should trigger tests. First, tests should run when someone pushes code to one of the main branches. This situation may be part of a pull request. In any case, you need to protect the actual merging of code into main branches to make sure that all tests pass before code is merged. Set up CD tooling so code changes deploy only when all tests have passed. This setup can apply to any environment or just to the production environment. This failsafe is crucial to avoid shipping quick fixes for issues without properly checking for side effects. While the additional check may slow you down a bit, it is usually worth the extra time. You may also want to run tests periodically against resources in production, or some other environment. This practice lets you know that everything is still up and running. Service monitoring is even more important to guard your production environment against unwanted disruptions.
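
As a concrete illustration of the split, here is a minimal pytest sketch in which slower integration tests are tagged with a custom marker, so a pipeline can run fast unit tests on every push and gate merges on the fuller suite (the marker name is a local convention, not a pytest built-in, and should be registered in pytest.ini):

```python
import pytest

def parse_price(text: str) -> float:
    return float(text.strip().lstrip("$"))

def test_parse_price():
    # Unit test: fast, pure, no I/O — suitable for every push.
    assert parse_price(" $19.99 ") == 19.99

@pytest.mark.integration  # custom marker; register under [pytest] markers
def test_http_roundtrip():
    # Integration test: touches a real external resource, so it is slower
    # and may flake — run it as a pre-merge or pre-deploy gate.
    import urllib.request
    with urllib.request.urlopen("https://example.com") as resp:
        assert resp.status == 200

# Example CI commands:
#   pytest -m "not integration"   # every push
#   pytest -m integration         # gate before merge/deploy
```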


Vulnerability Management | A Complete Guide and Best Practices

Managing vulnerabilities helps organizations avoid unauthorized access, illicit credential usage, and data breaches. This ongoing process starts with a vulnerability assessment. A vulnerability assessment identifies, classifies, and prioritizes flaws in an organization's digital assets, network infrastructure, and technology systems. Assessments are typically recurring and rely on scanners to identify vulnerabilities. Vulnerability scanners look for security weaknesses in an organization's network and systems. Vulnerability scanning can also identify issues such as system misconfigurations, improper file sharing, and outdated software. Most organizations first use vulnerability scanners to capture known flaws. Then, for more comprehensive vulnerability discovery, they use ethical hackers to find new, often high-risk or critical vulnerabilities. Organizations have access to several vulnerability management tools to help look for security gaps in their systems and networks.
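
At its core, part of what a scanner automates is matching an asset inventory against published advisories. A toy sketch of that matching step (the package names, versions and CVE ID are invented; real scanners consume feeds such as the NVD and handle vendor-specific version schemes):

```python
# Toy version-matching core of a vulnerability scanner. Inventory and
# advisory data are invented; real scanners use feeds like the NVD.
installed = {"examplelib": "2.4.1", "othertool": "1.18.0"}

advisories = [
    {"pkg": "examplelib", "fixed_in": "2.4.3", "id": "CVE-XXXX-YYYY"},
]

def vtuple(version: str):
    # Crude numeric comparison; real tools parse richer version formats.
    return tuple(int(x) for x in version.split("."))

for adv in advisories:
    have = installed.get(adv["pkg"])
    if have and vtuple(have) < vtuple(adv["fixed_in"]):
        print(f"{adv['pkg']} {have} is vulnerable: "
              f"{adv['id']} (fixed in {adv['fixed_in']})")
```

The hard parts of real scanners — asset discovery, authenticated checks, configuration tests, prioritization — sit around this simple core.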


How Web 3.0 is Going to Impact the Digital World?

The concept of a trustless network is not new. The exclusion of any so-called “trusted” third parties from virtual transactions or interactions has long been an in-demand ideology. Considering how data theft is a prominent concern among internet users worldwide, trusting third parties with our data doesn’t seem right. Trustless networks ensure that no intermediaries interfere in any online transactions or interactions. A close example of trustlessness is the uber-popular blockchain technology. Blockchain is mostly used in transactions involving cryptocurrencies. It defines a protocol under which only the individuals participating in a transaction are connected, in a peer-to-peer manner. No intermediary is involved. Social media enjoys immense popularity today. And understandably so, for it allows us to connect and interact with known ones and strangers sans any geographical limits. But the firms that own social media platforms are few. And these few firms hold the information of millions of people. Sounds scary, right?


Is TypeScript the New JavaScript?

As a statically typed language, TypeScript performs type checks at compilation, flagging type errors and helping developers spot mistakes early in development. Reducing errors when working with large codebases can save hours of development time. Clear and readable code is easy to maintain, even for newly onboarded developers. Because TypeScript calls for assigning types, the code instantly becomes easier to work with and understand. In essence, TypeScript code is self-documenting, allowing distributed teams to work much more efficiently. Teams don’t have to spend inordinate amounts of time familiarizing themselves with a project. TypeScript’s integration with editors also makes it much easier to validate the code thanks to context-aware suggestions. TypeScript can determine what methods and properties can be assigned to specific objects, and these suggestions tend to increase developer productivity. TypeScript is widely used to automate the deployment of infrastructure and CI/CD pipelines for backend and web applications. Moreover, the client part and the backend can be written in the same language—TypeScript.


4 signs you’re experiencing burnout, according to a cognitive scientist

One key sign of burnout is that you don’t have the motivation to get any work done. You might not even want to come to work at all. Instead, you dread the thought of the work you have to do. You find yourself hating both the specific tasks you have to do at work and the mission of the organization you’re working for. You just can’t generate enthusiasm about work at all. A second symptom is a lack of resilience. Resilience is your ability to get over a setback and get yourself back on course. It’s natural for a failure, bad news, or criticism to make you feel down temporarily. But if you find yourself sad or angry for days because of something that happened at work, your level of resilience is low. When you’re feeling burned out, you also tend to have bad interactions with your colleagues and coworkers. You find it hard to resist saying something negative or mean. You can’t hide your negative feelings about things or people, and that can upset others. In this way, your negative feelings about work become self-fulfilling, because they actually create more unpleasant situations.


Spotting a Modern Business Crisis — Before It Strikes

Modern technologies such as more-efficient supply chain operations, the internet, and social media have not only increased the pace of change in business but have also drawn more attention to its impact on society. Fifty years ago, oversight of companies was largely the domain of regulatory agencies and specialized consumer groups. What the public knew was largely defined by what businesses were required to disclose. Today, however, public perception of businesses is affected by a diverse range of stakeholders — consumers, activists, local or national governments, nongovernmental organizations, international agencies, and religious, cultural, or scientific groups, among others. ... There are a few ways businesses can identify risks. One, externalize expertise through insurance and consulting companies that identify sociopolitical or climate risks. Two, hire the right talent for risk assessment. Three, rely on government agencies, media, industry-specific institutions, or business leaders’ own experience of risk perception. A fail-safe approach is to use all three mechanisms in tandem, if possible.


Today’s Most Vital Question: What is the Value of Your Data?

Data has latent value; that is, data has potential value that has not yet been realized. The possession of data in and of itself provides zero economic value; in fact, possessing data carries storage, management, security, and backup costs and potential regulatory and compliance liabilities. ... Data must be “activated” or put into use in order to convert that latent (potential) value into kinetic (realized) value. The key is getting the key business stakeholders to envision where and how to apply data (and analytics) to create new sources of customer, product, service, and operational value. The good news is that most organizations are very clear as to where and how they create value. ... The value of the organization’s data is tied directly to its ability to support quantifiable business outcomes or use cases. ... Many data management and data governance projects stall out because organizations lack a business-centric methodology for determining which of their data sources are the most valuable.


Federal watchdog warns security of US infrastructure 'in jeopardy' without action

The report was released in conjunction with a hearing on securing the nation’s infrastructure held by the House Transportation and Infrastructure Committee on Thursday. Nick Marinos, the director of Information Technology and Cybersecurity at GAO, raised concerns in his testimony that the U.S. is “constantly operating behind the eight ball” on addressing cyber threats. “The reality is that it just takes one successful cyberattack to take down an organization, and each federal agency, as well as owners and operators of critical infrastructure, have to protect themselves against countless numbers of attacks, and so in order to do that, we need our federal government to be operating in the most strategic way possible,” Marinos testified to the committee. According to the report, GAO has made over 3,700 recommendations related to cybersecurity at the federal level since 2010, and around 900 of those recommendations have not been addressed. Marinos noted that 50 of the unaddressed concerns are related to critical infrastructure cybersecurity.



Quote for the day:

"Self-control is a critical leadership skill. Leaders generally are able to plan and work at a task over a longer time span than those they lead." -- Gerald Faust

Daily Tech Digest - December 03, 2021

IT threat evolution Q3 2021

Earlier this year, while investigating the rise of attacks against Exchange servers, we noticed a recurring cluster of activity that appeared in several distinct compromised networks. We attribute the activity to a previously unknown threat actor that we have called GhostEmperor. This cluster stood out because it used a formerly unknown Windows kernel mode rootkit that we dubbed Demodex, along with a sophisticated multi-stage malware framework aimed at providing remote control over the attacked servers. The rootkit is used to hide the user mode malware’s artefacts from investigators and security solutions, while demonstrating an interesting loading scheme involving the kernel mode component of an open-source project named Cheat Engine to bypass the Windows Driver Signature Enforcement mechanism. ... The majority of GhostEmperor infections were deployed on public-facing servers, as many of the malicious artefacts were installed by the httpd.exe Apache server process, the w3wp.exe IIS Windows server process, or the oc4j.jar Oracle server process.


USB Devices the Common Denominator in All Attacks on Air-Gapped Systems

There have been numerous instances over the past several years where threat actors managed to bridge the air gap and access mission-critical systems and infrastructure. The Stuxnet attack on Iran — believed to have been led by US and Israeli cybersecurity teams — remains one of the most notable examples. In that campaign, operatives managed to insert a USB device containing the Stuxnet worm into a target Windows system, where it exploited a vulnerability (CVE-2010-2568) that triggered a chain of events that eventually resulted in numerous centrifuges at Iran's Natanz uranium enrichment facility being destroyed. Other frameworks that have been developed and used in attacks on air-gapped systems over the years include South Korean hacking group DarkHotel's Ramsay, China-based Mustang Panda's PlugX, the likely NSA-affiliated Equation Group's Fanny, and China-based Goblin Panda's USBCulprit. ESET analyzed these malware frameworks, along with others that have not been specifically attributed to any group, such as ProjectSauron and agent.btz.


How to do data science without big data

When you have visibility into the organizational strategy and the business problems to be solved, the next step is to finalize your analytics approach. Find out whether you need descriptive, diagnostic, or predictive analytics and how the insights will be used; this will clarify the data you should collect. If sourcing data is a challenge, phase the collection process to allow for iterative progress on the analytics solution. For example, executives at a large computer manufacturer we worked with wanted to understand what drove customer satisfaction, so they set up a customer experience analytics program that started with direct feedback from the customer through voice-of-customer surveys. Descriptive insights presented as data stories helped improve the net promoter scores during the next survey. Over the next few quarters, they expanded their analytics to include social media feedback and competitor performance using sources such as Twitter, discussion forums, and double-blind market surveys. To analyze this data, they used advanced machine learning techniques.
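
As a small concrete illustration of the descriptive starting point, the Python sketch below computes a Net Promoter Score from voice-of-customer survey responses, using the standard NPS definition (percentage of promoters scoring 9-10 minus percentage of detractors scoring 0-6); the sample scores are made up.

# Minimal sketch: compute a Net Promoter Score (NPS) from 0-10
# "how likely are you to recommend us?" survey responses.
# The sample scores below are made up for illustration.

def net_promoter_score(scores: list[int]) -> float:
    """NPS = % promoters (9-10) minus % detractors (0-6)."""
    promoters = sum(1 for s in scores if s >= 9)
    detractors = sum(1 for s in scores if s <= 6)
    return 100.0 * (promoters - detractors) / len(scores)

survey_scores = [10, 9, 8, 7, 9, 3, 6, 10, 8, 9]
print(f"NPS: {net_promoter_score(survey_scores):+.1f}")  # prints NPS: +30.0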


Applying Social Leadership to Enhance Collaboration and Nurture Communities

Social leadership seems to differ in that it is not a form of leadership that is granted, as is often the case in formal hierarchical environments. Organisations with more “traditional management” structures and approaches tend to grant managers authority, accountabilities and power. Also, as I imagine you have seen, there has been much commentary over the years on the fact that management and leadership are not the same thing. Some years ago, when I was undertaking the Chartered Manager program with the Chartered Management Institute (CMI), I came across the definition that management is “doing things right,” whereas leadership is “doing the right thing”. I find this succinct explanation of the difference refreshing and have continued to use it in my own coaching and mentoring work since. It feels to me that “doing the right thing” is the modus operandi of the social leader. Also, we talk a lot about the problem of accidental managers: those who have been promoted into managerial roles, often by having been successful in their technical domains.


Report: APTs Adopting New Phishing Methods to Drop Payload

"When an RTF Remote Template Injection file is opened using Microsoft Word, the application will retrieve the resource from the specified URL before proceeding to display the lure content of the file. This technique is successful despite the inserted URL not being a valid document template file," Raggi says. Researchers demonstrated a process in which the RTF file was weaponized to retrieve the documentation page for RTF version from a URL at the time the file is opened. "The technique is also valid in the .rtf file extension format, however a message is displayed when opened in Word which indicates that the content of the specified URL is being downloaded and in some instances an error message is displayed in which the application specifies that an invalid document template was utilized prior to then displaying the lure content within the file," Raggi says. The weaponization part of the RTF file is made possible by creating or altering an existing RTF file’s document property bytes using a hex editor, which is a computer program that allows for manipulation of the fundamental binary data.


A blockchain connected motorbike: what Web 3.0 means for mobility and why you should care

We’ve been hearing about the potential of Web 3.0 for years: a decentralized web where information is distributed across nodes, making it more resistant to shutdowns and censorship. Specifically, its foundation lies in edge computing, artificial intelligence, and decentralized data networks. But what we haven’t talked about enough is the massive impact Web 3.0 will have on mobility. Web 3.0 aims to build a new scalable economy where transactions are powered by blockchain technology, eschewing the need for a central intermediary or platform. And in the mobility space, there is a lot happening. ... Pave Bikes connect to a private blockchain network. When you get your bike, you receive a non-fungible token (NFT), effectively a private key: a token based on the ERC721 standard. It is used to unlock the ebike via the Pave+ App. To be exact, the Pave mobile app is technically a dApp, a decentralized application connected to the blockchain. It enables riders to securely authenticate their proof of purchase and access their bike using Bluetooth, even without an internet connection.
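
The article doesn't describe Pave's implementation in detail, but the general pattern, proving ownership of the ERC721 token before releasing the lock, can be sketched with web3.py; the node URL, contract address, and token ID below are placeholders, not Pave's actual contract.

# Hypothetical sketch of ERC721-based unlock verification using web3.py.
# The node URL, contract address, and token ID are placeholders; this is
# the general pattern, not Pave's actual implementation.
from web3 import Web3

# Minimal ERC721 ABI: we only need ownerOf().
ERC721_ABI = [{
    "name": "ownerOf",
    "type": "function",
    "stateMutability": "view",
    "inputs": [{"name": "tokenId", "type": "uint256"}],
    "outputs": [{"name": "", "type": "address"}],
}]

w3 = Web3(Web3.HTTPProvider("https://example-node.invalid"))  # placeholder node
bike_nft = w3.eth.contract(
    address=Web3.to_checksum_address("0x0000000000000000000000000000000000000000"),
    abi=ERC721_ABI,
)

def may_unlock(rider_address: str, token_id: int) -> bool:
    """True if the rider's wallet holds the NFT tied to this bike."""
    owner = bike_nft.functions.ownerOf(token_id).call()
    return owner.lower() == rider_address.lower()

# The app would call may_unlock() and, if True, send the Bluetooth
# unlock command; a cached proof could cover offline use.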


Open banking will continue its exponential rise in the UK in 2022

Over the next year and beyond, it will be interesting to see how Variable Recurring Payments (VRPs) continue to develop to allow businesses to connect to authorised payment providers that make payments on the customer’s behalf. Direct debit, the main mechanism in use today, is expensive, slow and relies on a painful, mainly paper-based process that is long overdue for digital transformation. I anticipate 2022 will be the year we begin to see VRPs in full effect. This will provide countless opportunities for consumers to find new ways to manage their finances. As VRPs progress, we will discover that they can do far more than simply pay bills, unlocking aspects of smart saving, one-click payments, and control over subscriptions. It will also be important to address issues that work against the great benefits of open banking in the near future. The 90-day reauthorisation rule, which requires open banking providers to re-confirm consent with the customer every 90 days, must be addressed, as it currently undermines the principles of convenience and ease that open banking has worked to showcase.


Major trends in online identity verification for 2022

As both consumer and investor demand for fintech startups continues to heat up, we expect to see even more neobanks and cryptocurrency investment platforms launching in the coming year. Unfortunately, bad actors are ready, and they often target these nascent platforms in the expectation that fraud prevention may be an afterthought at launch. But we expect that, as these startups go to market, they will shift their initial focus from purely optimizing for new user sign-ups to preventing fraud on their platforms, moving beyond the required risk and compliance checks to more comprehensive anti-fraud solutions. Fortunately, there are ID verification solutions that can help with both, preventing fraud while still optimizing for sign-up conversions. Likewise, the tight hiring market for software developers will lead these new fintech firms to look for no-code or low-code ID verification and compliance solutions, rather than attempting to build them in-house.


AI-Based Software Testing: The Future of Test Automation

The success of digital technologies, and by extension of the businesses built on them, is underpinned by the optimal performance of the software systems at the core of those enterprises’ operations. Enterprises often make a trade-off between delivering a superior user experience and a faster time to market. As a consequence, the quality of their software systems suffers, and they cannot capitalize on an early entry into the market, resulting in lost revenue and brand value. The alternative is comprehensive, rigorous software testing to find and fix bugs before deployment. In fact, methodologies such as Agile and DevOps have given enterprises the means to achieve both: a superior user experience and a faster time to market. This is where AI-based automation comes into play, making testing accurate, comprehensive, predictive, cost-effective, and quick. Artificial Intelligence, or AI, has become the buzzword for anything state-of-the-art or futuristic and is poised to make our lives more convenient.


Will Automation Fill Gaps Left by the ‘Great Resignation’?

From Lane’s perspective, the main areas DevOps teams should look to automate are continuous integration and continuous delivery (CI/CD), IaC and AIOps-enabled incident management platforms. “By taking the manual nature of day-to-day work off of DevOps engineers’ plates, they are freed to focus on digital transformation,” he said. “The number-one stumbling block is not starting with process.” Lane noted that unless you understand all the steps in a procedure you’re trying to automate, it is very difficult to maximize the power of automation tools. “Much of the process that is still adhered to today is outdated for the digital age,” he said. “Spend the time up front to map out what you hope to achieve with an automation project, what all the touchpoints are and how one can measure the quality of automation when it’s implemented.” Michaels added that while the internet is flooded with companies shouting that they have the “best” tools, which tool is actually “best” will be determined by budget and the languages a team already knows.



Quote for the day:

"Leadership is familiar, but not well understood." -- Gerald Weinberg