Daily Tech Digest - October 30, 2021

Ransomware Attacks Are Evolving. Your Security Strategy Should, Too

Modern ransomware attacks typically combine tactics such as social engineering, email phishing, malicious email links and the exploitation of vulnerabilities in unpatched software to infiltrate environments and deploy malware. That means there are no days off from maintaining good cyber-hygiene. But there’s another challenge: as an organization’s defense strategies against common threats and attack methods improve, bad actors will adjust their approach to find new points of vulnerability. Thus, threat detection and response require real-time monitoring of various channels and networks, which can feel like a never-ending game of whack-a-mole. So how can organizations ensure they stay one step ahead if they don’t know where the next attack will target? The only practical approach is to implement a layered security strategy that balances prevention, threat detection and remediation – starting with a zero-trust security strategy. Initiating zero-trust security requires both an operational framework and a set of key technologies designed to help modern enterprises better secure digital assets.


Stateful Applications in Kubernetes: It Pays to Plan Ahead

Maybe you want to go with a pure cloud solution, like Google Kubernetes Engine (GKE), Amazon Elastic Kubernetes Service (EKS) or Azure Kubernetes Service (AKS). Or perhaps you want to use your on-premises data center for solutions like Red Hat’s OpenShift or Rancher. You’ll need to evaluate all the different components required to get your cluster up and running. For instance, you’ll likely have a preferred container network interface (CNI) plugin that meets your project’s requirements and drives your cluster’s networking. Once your clusters are operational and you’ve completed the development phase, you’ll begin testing your application. But now your platform team is struggling to maintain your stateful application’s availability and reliability. As part of your stateful application, you’ve been using a database like Cassandra, MongoDB or MySQL. Every time a container is restarted, you begin to see errors in your database. You can prevent these errors with some manual intervention, but then you’re missing out on the native automation capabilities of Kubernetes.
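The usual remedy for this restart problem is to run the database as a Kubernetes StatefulSet, which gives each replica a stable network identity and its own persistent volume, so data survives container restarts. A minimal sketch follows; the Cassandra image, names and storage size are illustrative assumptions, not a production recipe:

```yaml
# Hypothetical StatefulSet sketch — adjust image, storage class and sizes.
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: cassandra
spec:
  serviceName: cassandra          # headless Service gives each pod a stable DNS name
  replicas: 3
  selector:
    matchLabels:
      app: cassandra
  template:
    metadata:
      labels:
        app: cassandra
    spec:
      containers:
        - name: cassandra
          image: cassandra:4.0
          volumeMounts:
            - name: data
              mountPath: /var/lib/cassandra
  volumeClaimTemplates:           # each replica gets its own PersistentVolumeClaim
    - metadata:
        name: data
      spec:
        accessModes: ["ReadWriteOnce"]
        resources:
          requests:
            storage: 10Gi
```

Because each pod keeps its claim across restarts, the database comes back with its data intact instead of erroring out, without the manual intervention described above.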


Understanding Kubernetes Compliance and Security Frameworks

Compliance has become crucial for ensuring business continuity, preventing reputational damage and establishing the risk level for each application. Compliance frameworks aim to address security and privacy concerns through easy monitoring of controls, team-level accountability and vulnerability assessment—all of which present unique challenges in a K8s environment. To fully secure Kubernetes, a multi-pronged approach is needed: clean code, full observability, prevention of information exchange with untrusted services and digital signatures. One must also consider network, supply chain and CI/CD pipeline security, resource protection, architecture best practices, secrets management and protection, vulnerability scanning and container runtime protection. A compliance framework can help you systematically manage all this complexity. ... The Threat Matrix for Kubernetes, developed from the widely recognized MITRE ATT&CK (Adversarial Tactics, Techniques & Common Knowledge) Matrix, takes a different approach based on today’s leading cyberthreats and hacking techniques.


Authentication in Serverless Apps—What Are the Options?

In serverless applications, there are many components interacting—not only end users and applications but also cloud vendors and applications. This is why common authentication methods, such as single-factor, two-factor and multifactor authentication, offer only a bare minimum foundation. Serverless authentication requires a zero-trust mentality—no connection should be trusted, and even communication between internal components of an application should be authenticated and validated. To properly secure serverless authentication, you also need to use authentication and authorization protocols, configure secure intraservice permissions and monitor and control incoming and outgoing access. ... A network is made accessible through a SaaS offering to external users. Access will be restricted, and every user will require the official credentials to achieve that access. However, this brings up the same problem raised above—the secrets must be stored somewhere. You cannot manage how your users access and store the credentials that you provide them with; therefore, you should assume that their credentials are not being kept securely and that they may be compromised at any point.
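As a minimal sketch of the "authenticate every internal connection" idea, the following issues and verifies short-lived HMAC-signed tokens between two internal components. This is a toy under stated assumptions: real serverless apps would typically lean on a managed identity service or standard protocols such as OAuth 2.0 and JWT, and the hard-coded secret exists only for illustration — in practice it would come from a secrets manager.

```python
import hashlib
import hmac
import time

# Illustrative only — never hard-code secrets; fetch from a secrets manager.
SECRET = b"demo-shared-secret"

def issue_token(service_id: str, ttl: int = 300) -> str:
    """Sign a short-lived token that one internal component presents to another."""
    expiry = str(int(time.time()) + ttl)
    payload = f"{service_id}:{expiry}"
    sig = hmac.new(SECRET, payload.encode(), hashlib.sha256).hexdigest()
    return f"{payload}:{sig}"

def verify_token(token: str) -> bool:
    """Reject tokens that are expired or whose signature does not match."""
    service_id, expiry, sig = token.rsplit(":", 2)
    payload = f"{service_id}:{expiry}"
    expected = hmac.new(SECRET, payload.encode(), hashlib.sha256).hexdigest()
    return hmac.compare_digest(sig, expected) and int(expiry) > time.time()
```

Even inside a single application, every hop validates the token, which is the zero-trust posture the excerpt describes.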


The economics behind adopting blockchain

If we take the insurance sector as a use case, we can see how blockchain mitigates various issues around information asymmetries. One fundamental concern in the insurance sector is the principal-agent problem, which stems from conflicting incentives, amid information asymmetry, between the principal (the insurer) and the agent of the company. Adverse outcomes include unprofessional conduct, agents forging documents to meet assigned targets and misrepresentation of compliance, often leading to the mis-selling of insurance products. These problems occur primarily due to the absence of an integrated mechanism to track and prevent fraudulent conduct by agents. In such a scenario, blockchain has the ability to bridge the gaps and enhance the customer experience by providing a distributed, immutable and transparent rating system that allows agents to be rated according to their performance by companies as well as clients.


Techstinction - How Technology Use is Having a Severe Impact on our Climate

As in most large organisations, there is a general consciousness of the impact the Financial Services industry is having on the environment. All three of these banks are taking serious measures to reduce their CO2 emissions and to change the behaviours of their staff. The NatWest Group (which owns RBS), for example, recently published a working-from-home guide for its employees containing tips on how to save energy. Whilst this and all sustainability measures should be applauded, it’s important to acknowledge that "sustainability in our workplace" is very different from, and less important than, "sustainability in our work", simply because there is more to be gained by optimising what we are doing as opposed to where we do it, both financially and for the environment. Sustainability in our work involves being lean in everything we do, including the hardware infrastructure being used, being completely digital in the services provided as well as in how we produce software to deliver these services. All the major cloud providers invest heavily in providing energy-efficient infrastructure as well as using renewable energy sources.


How machine learning speeds up Power BI reports

Creating aggregations you don't end up using is a waste of time and money. "Creating thousands, tens of thousands, hundreds of thousands of aggregations will take hours to process, use huge amounts of CPU time that you're paying for as part of your licence and be very uneconomic to maintain," Netz warned. To help with that, Microsoft turned to some rather vintage database technology dating back to when SQL Server Analysis Services relied on multidimensional cubes, before the switch to in-memory columnar stores. Netz originally joined Microsoft when it acquired his company for its clever techniques around creating collections of data aggregations. "The whole multidimensional world was based on aggregates of data," he said. "We had this very smart way to accelerate queries by creating a collection of aggregates. If you know what the user queries are, [you can] find the best collection of aggregates that will be efficient, so that you don't need to create surplus aggregates that nobody's going to use or that are not needed because some other aggregates can answer [the query]."
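The idea of picking a small collection of aggregates from observed queries can be sketched in a few lines. This is a deliberately naive frequency-based heuristic of my own devising, not Microsoft's actual algorithm, and names like `choose_aggregations` are hypothetical:

```python
from collections import Counter

def choose_aggregations(query_log, max_aggs=3):
    """Keep only the most frequently requested grouping-column sets as
    aggregates, instead of materializing every possible combination."""
    freq = Counter(frozenset(cols) for cols in query_log)
    return [set(cols) for cols, _ in freq.most_common(max_aggs)]

def answering_aggregate(query_cols, aggregates):
    """A query can be answered by any aggregate whose columns are a
    superset of the query's grouping columns; prefer the smallest one."""
    candidates = [a for a in aggregates if set(query_cols) <= a]
    return min(candidates, key=len) if candidates else None
```

The second function captures the closing point of the quote: an aggregate over (date, region) also answers a date-only query, so the date-only aggregate is surplus.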


How GitOps Benefits from Security-as-Code

The emergence of security-as-code signifies how the days of security teams holding deployments up are waning. “Now we have security and app dev who are now in this kind of weird struggle — or I think historically had been — but bringing those two teams together and allowing flexibility, but not getting in the way of development is really to me where the GitOps and DevSecOps emerge. That’s kind of the big key for me,” Blake said. ... Developers today are deploying applications in an often highly distributed microservices environment. Security-as-code serves to both automate security for CI/CD with GitOps while also ensuring security processes are taking interconnectivity into account. “It’s sort of a realization that everything is so interconnected — and you can have security problems that can cause operational problems. If you think about code quality, one of your metrics for ‘this is good code’ doesn’t cause a security vulnerability,” Omier said. “So, I think a lot of these terms really come from acknowledging that you can’t look at individual pieces, when you’re thinking about how we are doing? ..."


The role of Artificial Intelligence in manufacturing

There are a few key advantages that make the adoption of AI a particularly suitable launching pad for manufacturers embarking on their cognitive computing journey – intelligent maintenance, intelligent demand planning and forecasting, and product quality control. The deployment of AI is a complex process, as with many facets of digitisation, but that has not stopped companies from moving forward. The ability to grow and sustain the AI initiative over time, in a manner that generates increasing value for the enterprise, is likely to be crucial to achieving early success milestones on an AI adoption journey. Manufacturing companies are adopting AI and ML with such speed because, by using these cognitive computing technologies, organisations can optimise their analytics capabilities, make better forecasts and decrease inventory costs. Improved analytics capabilities enable companies to switch to predictive maintenance, reducing maintenance costs and downtime. The use of AI allows manufacturers to predict when or if functional equipment will fail so that maintenance and repairs can be scheduled in advance.


What the metaverse means for brand experiences

The metaverse is best described as a 3D World Wide Web or a digital facsimile of the physical world. In this realm, users can move about, converse with other users, make purchases, hold meetings, and engage in all manner of other activities. In the metaverse, all seats at live performances are front and center, sporting events are right behind home plate or center court, and of course, all avatars remain young and beautiful — if that’s what you desire — forever. As you might imagine, this is a marketer’s dream. Anheuser-Busch InBev global head of technology and innovation Lindsey McInerney explained to Built In recently that marketing is all about getting to where the people are, and a fully immersive environment is ripe with all manner of possibilities, from targeted marketing and advertising opportunities to fully virtualized brand experiences. Already, companies like ABB are experimenting with metaverse-type marketing opportunities, such as virtual horse racing featuring branded NFTs.



Quote for the day:

"Making those around you feel invisible is the opposite of leadership." -- Margaret Heffernan

Daily Tech Digest - October 29, 2021

How to become an entrepreneurial engineer and create your own career path

"To be a successful entrepreneurial engineer, you must wear two hats: one with a deep technical focus and the other focused on the goals of the business," said Loren Goodman, CTO and co-founder of InRule Technology. "This allows you to make decisions in real-time leveraging your understanding of diminishing returns on both fronts. The why, the what and the how are traditionally separated, and small changes to any part can have exaggerated effects on the others. You bring this thinking together—for example, knowing that a feature can be done in a fraction of the time if a small part was removed from scope and also knowing that that part is not core to the business need." Goodman stressed that entrepreneurial engineers must be curious about the bigger picture and be unafraid to take on challenging problems. They must also be success-focused, with a relentless passion for achieving the best solution to difficult problems, no matter how unrealistic things might seem. Finally, he said, a successful entrepreneurial engineer must be scrappy: "You are going to have to be comfortable working without all the necessary resources for a long time while still staying focused on your objectives."


Forensic Monitoring of Blockchains Is Key for Broader Industry Adoption

In the event that an adversary corrupts more than 1/3 of the master nodes in the BFT committee of any given epoch, it is then technically possible for said adversary to violate safety and jeopardize consensus by creating forks, resulting in two or more finalized blockchains. However, certain messages would need to be signed and sent by these nodes to make this happen, which can then be detected by the system immediately after a fork with a length of only one appears. The signed messages can then be used as irrefutable proof of the misbehavior. Those messages are embedded into the blockchain and can be obtained by querying master nodes for forked blockchains. This is what enables the forensic monitoring feature, which can identify as many Byzantine master nodes as possible, all while obtaining the proof by querying as few witnesses as possible. For example, two separate honest nodes, each having access to one of the two conflicting blockchains respectively, are sufficient for the proof.
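The core of the forensic check — spotting a node that signed two conflicting blocks at the same height — can be sketched as follows. This is an illustrative simplification, not the actual protocol code: signature verification is assumed to have already happened, and the tuple shape is hypothetical.

```python
from collections import defaultdict

def find_equivocators(signed_votes):
    """signed_votes: iterable of (node_id, height, block_hash) tuples, each
    assumed to carry a valid signature from node_id. A node that signed two
    different block hashes at the same height is provably Byzantine; the
    pair of conflicting signed messages is the irrefutable proof."""
    seen = defaultdict(set)      # (node_id, height) -> block hashes signed
    proofs = {}
    for node, height, block_hash in signed_votes:
        seen[(node, height)].add(block_hash)
        if len(seen[(node, height)]) > 1:
            proofs[node] = seen[(node, height)]
    return proofs
```

Collecting votes from just one honest witness on each side of the fork is enough to populate `signed_votes` with the conflicting pair, matching the two-witness claim in the excerpt.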


Infrastructure-as-Code: 6 Best Practices for Securing Applications

Research from security platform provider Snyk reveals that many companies are only starting out on their IaC journey, with 63% just beginning to explore the technology and only 7% stating they’ve implemented IaC to meet current industry standards. And with this practice comes changes in responsibility: IaC further extends developers’ responsibility to include securing their code and infrastructure. Misconfigurations can easily introduce security risks if best practices are not followed. In fact, according to Gartner, “70% of attacks against containers will be from known vulnerabilities and misconfigurations that could have been remediated.” Often, security trails behind the usage of IaC, resulting in configuration issues that are only detected after applications are deployed. That doesn’t have to be the case. In fact, the best way to ensure every configuration is secure, while still benefiting from the speed and repeatability of IaC, is to build security testing for IaC into developers’ workflows, the same as other forms of code.
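Building IaC security testing into developer workflows can be as simple as running policy checks over parsed configuration before anything is deployed. The toy check below flags one classic misconfiguration; the resource shape and rule are hypothetical, and real teams would reach for a policy engine or a dedicated IaC scanner rather than hand-rolled checks.

```python
def lint_security_group(resource):
    """Flag an ingress rule that opens SSH to the whole internet — a
    classic IaC misconfiguration worth catching in the developer's
    workflow, before the application is deployed."""
    findings = []
    for rule in resource.get("ingress", []):
        if rule.get("cidr") == "0.0.0.0/0" and rule.get("port") == 22:
            findings.append("SSH (port 22) open to 0.0.0.0/0")
    return findings
```

Wired into CI, a check like this turns "security trails behind IaC" into a failing build the moment the misconfiguration is committed.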


The shift from DevOps and security to DevSecOps: 5 key roadblocks

There is DevOps plus security, and then there’s DevSecOps. What’s the difference? In the first case, security is a third wheel. In the second, it’s the third leg of the stool—an integral part of the system that’s almost unnoticeable unless or until it disappears. Indeed, to be effective, security must be everywhere—throughout the pipeline used to build and deploy as well as the runtime environment. In the DevSecOps model, security is a shared responsibility for development, security and operations teams and throughout the entire IT lifecycle. However, many organizations are challenged to integrate, rather than just tack on, security measures. This is a huge issue when a company’s own security is at stake, but an increasing number of attacks on the software supply chain is leaving tens, hundreds, even thousands of organizations vulnerable. There are many granular recommendations for achieving DevSecOps. Here are the bigger-picture issues that your organization must address to move beyond security as an afterthought.


Agile Architecture - What Is It?

From this definition, two very important terms emerge: Emergent Design and Intentional Architecture. Emergent Design is the process of analyzing and extending the architecture just enough to implement and validate the next increment in the development cycle. Intentional Architecture is about seeing the big picture. Large corporations need to simultaneously respond to new business challenges with large-scale architectural initiatives, where meeting the business objective involves multiple teams, products and systems. In this case, Emergent Design is not enough, as it is circumscribed within a single team. Without Intentional Architecture, we can have several problems, such as difficulty integrating, validating and maintaining the fulfillment of non-functional system requirements, low reuse, redundancy of solutions, etc. Intentional Architecture gives the teams a common objective/destination to be reached, allowing the alignment of efforts and the parallelization of the work of independent teams. In other words, it is the guiding track, the glue between the teams' work.


NRA Reportedly Hit By Russia-Linked Ransomware Attack

The NRA did not immediately respond to Information Security Media Group's request for comment. But Andrew Arulanandam, managing director of public affairs for the NRA, took to Twitter to say: "NRA does not discuss matters relating to its physical or electronic security. However, the NRA takes extraordinary measures to protect information regarding its members, donors, and operations - and is vigilant in doing so." Allan Liska, a ransomware analyst at the cybersecurity firm Recorded Future, told NBC that Grief is "the same group" as Evil Corp. The news outlet verified that the information in the leaked files includes grant proposal forms, names of recent grant recipients, an email sent to a grant winner, a federal W-9 form and minutes from the organization's virtual meeting in September. Sam Curry, CSO of Cybereason, tells ISMG, "It's unlikely this is a strategic attack, but time will tell. The way it would be strategic is to further divide the left from the right in the U.S. … The most likely scenario is that it's motivated by greed, and it has the potential to inadvertently explode politically. The next move is in the NRA's hands."


Is the Indian SaaS Story Overhyped?

Experts watching the SaaS space opine that after Freshworks’ recent listing, global perception of Indian SaaS companies has changed. Last month, Freshworks became the first Indian software maker to list on Nasdaq. “SaaS companies in India are gaining acceptance and attention from investors. Initially, investors were slow due to the nature of revenue which is a money sucker but as the customer base grew with a lower drop, the revenue started to look good. Things have changed a lot after Postman and Freshworks. Indian SaaS companies are now seriously looked at as potential unicorns,” said Anil Joshi, managing partner, Unicorn India Ventures. The SaaS ecosystem is relatively nascent in India and is led by players such as Freshworks, Capillary, Eka, etc., said Anurag Ramdasan, partner, 3one4 Capital. “While there are double-digit unicorns in Indian SaaS today, it’s still a very early ecosystem and we are seeing a lot of innovative SaaS in the seed to series A stage in India,” he said. Many companies that have become soonicorns and unicorns have great consumer stories and investors today look at India as a huge consumer story.


How do I select an SD-WAN solution for my business?

Network security is also gaining greater importance as cyber-security threats multiply, leading to cloud-based security techniques converging with SD-WAN in the SASE framework. But the transition to these technologies can be challenging, with significant support required from the SD-WAN partner. Therefore, enterprises need to evaluate SD-WAN providers based on three principal criteria. First, does the provider’s network reach align with the enterprise’s geographic locations and does the provider offer a Tier 1 IP backbone to realize the full performance advantages of SD-WAN? Second, does the provider offer a managed SD-WAN, including local internet or MPLS access, with end-to-end delivery, technical implementation support, and service assurance to help manage complexity? Third, does the provider have a clear SASE roadmap integral to its SD-WAN vision? This includes services like zero-trust network access (ZTNA) and cloud access security broker (CASB) for remote workers and cloud firewall and secure web gateway (SWG) to support the branch level.


The Rise of Event-Driven Architecture

In the REST framework, an API isn’t aware of the state of objects. The client queries the API to find out the state, and the role of the API is to respond to the client with the information. However, with an event-driven API, a client can subscribe to the API, effectively instructing it to monitor the state of objects and report back with real-time updates. Therefore, behavior shifts from stateless handling of repeatable, independent requests to stateful awareness of the virtual objects modeled on real-world operations. Event-driven APIs are a great way to meet the demands of modern end-users who expect customized and instantaneous access to information. Applying these APIs is easy to do in one-off, bespoke environments. However, things get more complicated when you need to offer this level of service at scale, and not every enterprise is ready to handle that level of complexity. To avoid amassing significant technical debt, organizations and developers should offload this complexity to a third party with the capabilities to synchronize digital experiences in real-time and at scale.
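The shift from stateless request/response to stateful subscriptions can be illustrated with a toy observable object: instead of clients polling an API for state, they register a callback and every state change is pushed to them. The class and method names below are illustrative only, not any particular API framework.

```python
class ObservableOrder:
    """Toy event-driven model: the server tracks object state and pushes
    changes to subscribers, instead of answering repeated stateless GETs."""

    def __init__(self, order_id):
        self.order_id = order_id
        self.status = "created"
        self._subscribers = []

    def subscribe(self, callback):
        """Register a client callback, as a subscription to this object."""
        self._subscribers.append(callback)

    def set_status(self, status):
        """Change state and notify every subscriber in real time."""
        self.status = status
        for cb in self._subscribers:
            cb(self.order_id, status)
```

In a real system the callback would be a webhook, WebSocket message or event on a broker, but the stateful contract — subscribe once, receive every change — is the same.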


We Are Testing Software Incorrectly and It's Costly

The tests you write are tightly coupled to the underlying design of your code. Design is constantly evolving. You now not only have to refactor the design of your production code — you have to change your tests, too! In other words, your tests should support refactoring by giving you confidence, but instead they make you work harder while giving no confidence that things still work correctly. For brevity, I won't even get into mock hell (please Google it). But instead of abandoning refactoring or unit tests, all you need to do is free yourself from the mistaken definition of "unit testing." Focus on testing behaviors! Instead of writing unit tests for every public method of every class, write unit tests for every component (i.e., user, product, order, etc.), covering every behavior of each component and focusing on the public interface of the unit. To achieve that, you will need to learn how to structure your code properly. Please don't package your code by technical concerns (controllers, services, repositories, etc.). Senior devs structure their code by domain.
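A small illustration of the behavior-focused style described above: the test exercises one behavior of a (hypothetical) cart component through its public interface only, so renaming or restructuring the internals doesn't break it.

```python
# Hypothetical shopping-cart component. Note the test below asserts a
# behavior ("adding the same product twice accumulates the quantity"),
# not the workings of any individual method or private field.
class Cart:
    def __init__(self):
        self._items = {}          # internal detail — free to change later

    def add(self, product, qty=1):
        self._items[product] = self._items.get(product, 0) + qty

    def quantity_of(self, product):
        return self._items.get(product, 0)


def test_adding_same_product_twice_accumulates_quantity():
    cart = Cart()
    cart.add("book")
    cart.add("book", 2)
    assert cart.quantity_of("book") == 3
```

If `_items` were later refactored into a list of line-item objects, this test would still pass unchanged — exactly the refactoring confidence the excerpt argues per-method tests fail to provide.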



Quote for the day:

"The world's greatest achievers have been those who have always stayed focussed on their goals and have been consistent in their efforts." -- Roopleen

Daily Tech Digest - October 28, 2021

Using Complex Networks to improve Machine Learning methods

Let’s start by defining what a complex network is: a collection of entities called nodes connected between themselves by edges that represent some kind of relationship. If you’re thinking: this is a graph! Well, you are correct, most complex networks can be considered a graph. However, complex networks usually scale up to thousands or millions of nodes and edges, which can make them pretty hard to analyze with standard graph algorithms. There is a lot of synergy between complex networks and the data science field because we have tools to try and understand how the network is built and what behavior we can expect from the entire system. Because of that, if you can model your data as a complex network, you have a new set of tools to apply to it. In fact, there are many machine learning algorithms that can be applied to complex networks and also algorithms that can leverage network information for prediction. Even though this intersection is relatively new, we can already play around with it a bit.
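As a minimal example of turning network structure into something a machine learning model can consume, the sketch below derives a per-node degree feature from an edge list. Real pipelines would use richer measures (centrality, clustering coefficients, node embeddings); the degree is just the simplest case, and the function name is my own.

```python
from collections import defaultdict

def degree_features(edges):
    """Build an undirected adjacency structure from an edge list and
    return each node's degree — the simplest per-node feature that
    injects network structure into an ML feature set."""
    adj = defaultdict(set)
    for u, v in edges:
        adj[u].add(v)
        adj[v].add(u)
    return {node: len(neighbors) for node, neighbors in adj.items()}
```

Using sets for neighbors keeps duplicate edges from inflating the degree, which matters when the edge list comes from noisy real-world data.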


How to Find a Mentor and Get Started in Open Source

What separates open source from its proprietary counterpart is the open source community, made up of a mix of volunteers, super-fans and über-users of a product or suite of products. So while it’s reasonably overwhelming to think about where to start, there’s the unique benefit of built-in communities to support you. It’s good to start with an idea of what you want to get out of your contribution — a job, a mentor, experience in a methodology, service, interest or coding language. Use the CNCF project landscape to search by your interest — monitoring, securing, or deploying, for example — or by organization or skillset. Next, consider whether you want to be part of one of the biggest, horizontal communities or whether you’d feel more comfortable in a smaller niche. And then it’s about deciding what you want to put in to achieve that goal. For Mohan, contributing to open source projects gives her experience in a wider breadth of technologies outside of her job, including in Kubernetes and chaos engineering.


Securing a New World: Navigating Security in the Hybrid Work Era

Security doesn’t get any easier with some workers returning to the office, others staying home and quite a few doing a bit of both. That’s because the office, which was once the company’s security standard, is often full of devices that have been sitting idle since early last year. Security patches, which are issued all the time, are important to install at the point they’re published. But a computer that has been turned off for a year, unable to download patches, is a vulnerable device. And there may be dozens or even hundreds of patches waiting in the queue that are needed to bring a device up to par. There are, not surprisingly, a host of recommendations that experts have offered to help security teams in their work. Educating employees on the threats that people and companies face is one of their top suggestions. A survey from Proofpoint’s State of the Phish report emphasizes the need for a people-centric approach to cybersecurity protections and awareness training that accounts for changing conditions, like those constantly experienced throughout the pandemic. 


Now’s the time for more industries to adopt a culture of operational resilience

When you think about resiliency and doing work in operational models, it’s a verb-based system, right? How are you going to do it? How are you going to serve? How are you going to manage? How are you going to change, modify, and adjust to immediate recovery? All of those verbs are what make resiliency happen. What differentiates one business sector from another aren’t those verbs. Those are immutable. It’s the nouns that change from sector to sector. So, focusing on all the same verbs, that same perspective we looked at within financial services, is equally as integratable when you think about telecommunications or power. ... We’re seeing resiliency in the top five concerns for board-level folks. They need a solution that can scale up and down. You cannot take a science fair project and impact an industry nor provide value in the quick way these firms are looking for. The idea is to be able to try it out and experiment. And when they figure out exactly how to calibrate the solution for their culture and level of complexity, then they can rinse, repeat, and replicate to scale it out.


AWS's new quantum computing center aims to build a large-scale superconducting quantum computer

The launch of the AWS Center for Quantum Computing sees Amazon reiterating its ambition to take a leading role in the field of quantum computing, which is expected to one day unleash unprecedented amounts of compute power. Experts predict that quantum computers, when they are built to a large enough scale, will have the potential to solve problems that are impossible to run on classical computers, unlocking huge scientific and business opportunities in fields like materials science, transportation or manufacturing. There are several approaches to building quantum hardware, all relying on different methods to control and manipulate the building blocks of quantum computers, called qubits. AWS has announced that the company has chosen to focus its efforts on superconducting qubits -- the same method used by rival quantum teams at IBM and Google, among others. AWS reckons that superconducting processors have an edge on alternative approaches: "Superconducting qubits have several advantages, one of them being that they can leverage microfabrication techniques derived from the semiconductor industry," Nadia Carlsten tells ZDNet.


The causes of technical debt, and how to mitigate it

There is no single silver bullet that will fix technical debt. Instead, it needs to be addressed in a multi-faceted way. First, there needs to be a better cultural understanding across the entire business regarding precisely what it is. Importantly, stakeholders, including product owners, must also understand how their actions and decisions may be contributing. Going back to the credit card analogy, it helps if stakeholders can bear in mind that they could be dealing with 22% or higher annual interest. In such a case, the temptation to ‘spend’ beyond the team’s limits and live with minimum payments is far less appealing. To pay off existing architectural and other types of technical debt, teams should compare their current minimum payments, and the impact of those on overall velocity and team morale, with the staggering expense of re-architecting part or all of a solution. Moving from a monolith to microservices is a good example. As mentioned, however, there is no one-size-fits-all solution. Long-term maintenance and ‘expenses’ need to be considered as well.


Why aren’t optical disks the top choice for archive storage?

Optical media is also designed with full backwards compatibility, meaning future BD-R and ODA drives will be able to read disks written in today’s drives. For example, you can read a CD-R disk written in 1991 in a current BD-R drive. In contrast, LTO-8 tape drives cannot read LTO-5 tape although they can read LTO-6 tapes. BD-R drives advertise a lifetime of 50 years and Sony advertises 100 years, both of which are longer than tape (30 years) and magnetic hard drives (five years). If you wanted a 50-year archive on LTO, you would be forced to migrate data at least once to avoid bit rot but not, as some optical marketing material suggests, every 10 years. Many people do this anyway to allow them to retire older tape drives and achieve greater storage density. There is also no current requirement to re-tension the tapes every so often. There is some debate about the bit error rate of optical versus tape, but that is a complex issue beyond the scope of this article.


How to develop a high-impact team

Innovation is increasingly becoming a team sport, requiring diverse perspectives and collective intelligence. These innovation-focused teams tend to be ephemeral. They form, collaborate, and disband quickly. Team members need to be able to step up and step back with equal ease. To participate in this fast, fluid model of leadership, less assertive employees (and those uninterested in careers in management) will likely need help stepping up. To get these reluctant leaders to step up and then step back, provide a path of retreat. Show them that being a designated leader can be a temporary assignment, existing for the duration of a project or even for just a single meeting. Some team members will need encouragement and support to become “step-up” leaders, but others will do so with ease. It can take work to then get them to step back and support others. You can help these people develop a more fluid leadership style by modeling healthy followership practices. Let them see you collaborating with a peer organization or contributing to a project led by someone below you in the management hierarchy.


Why automation progress stalls: 3 hidden culture challenges

“A general challenge with putting automation in place is that IT culture often focuses on heroic problem-solving rather than more mundane processes that prevent problems from happening in the first place,” says Red Hat technology evangelist Gordon Haff. “Automation has long been part of the picture – think system admins writing Bash scripts – but it’s also been reactive rather than proactive.” If your organization has treated automation mostly as a reactive problem-solver in the past, people may be less inclined to instinctively grasp its greater value. That’s where leaders have work to do in terms of communicating your big-picture plan and the role that automation – and everyone on the team – plays in it. This is also a mindset that must shift over time with experience and results: Automation should be as much (or more) about improvement and optimization as it is about dousing production fires or cutting costs. Ideally, automation should be boring, in the best possible sense of the word. “Modern automation practices, such as we often see in SRE roles, make automating systems and workflows part of the daily routine,” Haff says.


Regulation fatigue: A challenge to shift processes left

President Biden’s recent executive order asks government vendors to attest “to the extent practicable, to the integrity and provenance of open source software used within any portion of a product.” The president’s recent order, and the potential actions of legislators to follow, could lead to burdensome regulations that interfere with shift left practices, and ultimately slow down the pace of software development. The challenge with the directive is that nearly 60 percent of software developers have little to no secure coding training. Developers are traditionally focused on pushing out innovative, stable products, not triaging security alerts. They want to use open-source code without thinking about its possible security risks. Developers rely on open-source components because these are ready-made pieces of code that allow them to keep up with competitive release time frames. They often leave it to their security teams to identify mistakes at the end of the development process. Developers’ reliance on open-source components often presents a challenge to the cautious attitude of security teams. 



Quote for the day:

"Leaders, be mindful that there is a tendency to become arrogant. Such hubris blinds even the best intentions. Lead with humility." -- S Max Brown

Daily Tech Digest - October 27, 2021

Node.js makes fullstack programming easy with server-side JavaScript

Web application developers are inundated with options when it comes to choosing the languages, frameworks, libraries, and environments they will use to build their applications. Depending on which statistics you believe, the total number of available languages is somewhere between 700 and 9,000. The most popular—for the past nine years, according to the 2021 Stack Overflow Developer Survey—is JavaScript. Most people think of JavaScript as a front-end language, but Node.js brings it to the server. Originally launched in 2009, Node.js has quickly become one of the most widely used options among application developers. More than half of developers are now using Node.js—it is the most popular non-language, non-database development tool. It allows you to run JavaScript on the server side, which lets software engineers develop on the full web stack. Node.js’s popularity has snowballed for good reason. Node.js is a fast, low-cost, effective alternative to other back-end solutions. And with its two-way client-server communication channel, it is hard to beat for cross-platform development.


Your Data Plane Is Not a Commodity

If you are going to invest a ton of time, effort and engineering hours in a service mesh and a Kubernetes rollout, why would you want to buy the equivalent of cheap tires – in this case, a newer and minimally tested data plane written in a language that may not even have been designed to handle wire-speed application traffic? Because, truly, your data plane is where the rubber meets the road for your microservices. The data plane is what will directly influence customer perceptions of performance. The data plane is where problems will be visible. The data plane will feel scaling requirements first and most acutely. A slow-to-respond data plane will slow the entire Kubernetes engine down and affect system performance. Like tires, too, the data plane is relatively easy to swap out. You do not necessarily need major surgery to pick the one you think is best and mount it on your favorite service mesh and Kubernetes platform, but at what cost?


Why traditional IP networking is wrong for the cloud

Of course, the IP networking layer does provide a way to connect your data center to the cloud. However, one of the main challenges of legacy networking is that it provides limited visibility into applications in the cloud—the lifeblood of enterprises today and arguably the primary driver behind cloud adoption. At Layer 7, or the so-called application layer, enterprises have a holistic view of what takes place at that level (applications and collections of services) as well as in the stack below, such as at TCP and UDP ports and IP endpoints. By operating with the traditional stack (i.e., the IP layer) alone, enterprise teams have a substantially harder time viewing what is above them in the stack. They have a view of the network alone, and blind spots for everything else. Why does this matter? For one, it can significantly increase remediation time when performance problems occur. Indeed, enterprises need to understand how their cloud infrastructure works in relation to the application and A/B test configurations to align with application performance.


Defining the Developer Experience

Microservices architecture and cloud-native applications go hand in hand. Most organizations leverage a microservice architecture to decouple and achieve greater scale, as without it you have too many people changing the same code, causing velocity to slow as friction increases. Whereas in a monolithic architecture teams would bump into each other to merge, release, and deploy their changes, in a microservices architecture each team can clearly define the interfaces between their components, limiting the size and complexity of the codebase they are managing to that of a smaller, more agile team. Each team can move more quickly since they can focus on the components they own. Their level of friction and velocity can be that of just the group working on that component, not that of the larger development organization. ... But this creates its own problems as well, a key one being the complexity of ensuring the cohesive whole also gets tested and functions together as a complete software product.


How we built a forever-free serverless SQL database

How can we afford to give this away? Well, certainly we’re hoping that some of you will build successful apps that “go big” and you’ll become paying customers. But beyond that, we’ve created an innovative Serverless architecture that allows us to securely host thousands of virtualized CockroachDB database clusters on a single underlying physical CockroachDB database cluster. This means that a tiny database with a few kilobytes of storage and a handful of requests costs us almost nothing to run, because it’s running on just a small slice of the physical hardware. ... Given that the SQL layer is so difficult to share, we decided to isolate that in per-tenant processes, along with the transactional and distribution components from the KV layer. Meanwhile, the KV replication and storage components continue to run on storage nodes that are shared across all tenants. By making this separation, we get “the best of both worlds” – the security and isolation of per-tenant SQL processes and the efficiency of shared storage nodes.


Why Outdated jQuery Is Still the Dominant JavaScript Library

Despite its enormous usage, developers today may not even be aware that they’re using jQuery. That’s because it’s embedded in a number of large projects — most notably, the WordPress platform. Many WordPress themes and plugins rely on jQuery. The jQuery library is also a foundational layer of some of today’s most popular JavaScript frameworks and toolkits, like AngularJS and Bootstrap (version 4.0 and below). “A lot of the surprise about jQuery usage stats comes from living in a bubble,” Gołębiowski-Owczarek told me. “Most websites are not complex Web apps needing a sophisticated framework, [they are] mostly static sites with some dynamic behaviors — often written using WordPress. jQuery is still very popular there; it works and it’s simple, so people don’t feel the need to stop using it.” jQuery will continue to be a part of WordPress for some time to come, if for no other reason than that it would be difficult to remove without breaking backward compatibility.


How AI and AR are evolving in the workplace

Businesses are also using AR-based apps for tracking, identifying, and resolving technical issues, as well as for tasks such as retrofitting, assembling, manufacturing, and repairing production lines. The AI market is not only anticipated to help the development of the enterprise; the technology is also believed to help achieve business growth objectives and generate value. Nine out of 10 C-suite executives believe they must leverage AI to achieve their growth objectives. ... The difficulty with deploying evolving technologies is that, until they have fully matured, integration can be a challenge. With smart glasses, there can also be security and privacy concerns. In medical and surgical settings, for example, the use of cameras in operating rooms is very sensitive and controversial. For sensitive scenarios like these, the use of such devices must be agreed and understood beforehand to be for the benefit of all. While AI is a more developed technology, it is also costly, and may require a strong upfront investment.


Good security habits: Leveraging the science behind how humans develop habits

There is a secret recipe for good security habits that we’ve discovered from decades of research: it’s called the habit loop. And you can use the habit loop to hack your own brain for better security. You start with a prompt – which is just the signal that tells you to start a behavior. Then there’s the behavior itself. And finally, the most important step, giving yourself a reward. Even if the reward is just patting yourself on the back, your brain starts to release endorphins so when you see the prompt again next time, your brain will want to do that behavior again to receive another reward. Security can seem scary to some people while to others it might feel like it’s too much work. Using the habit loop can help make security feel easy, because we don’t have to think about habits: by definition they are what we do when we’re on autopilot. But since habits make up about 50% of everything we do in our lives, it’s also the best way to have a massive impact on our security.


More Tech Spending Moves Out of IT

Karamouzis says this is leading to a shift in how organizations buy technology. Enterprises had previously moved from buying products to buying solutions -- a combination of products and services. These products and solutions were purchased in a serial fashion. That doesn’t work anymore, says Karamouzis, because now you must make four to 10 buying decisions concurrently to ensure different digital business initiatives lead to growth. This is part of a new way organizations are buying; they are buying “outcomes,” she says. These changes have pushed organizations more to the public cloud, making enterprises and the entire global economy increasingly dependent on internet-delivered services. The most important of these services are provided directly by or running within hyperscale cloud services providers, says Gartner VP analyst Jay Heiser. “As everything becomes digital, virtually every aspect of society and the economy will have dependence upon the real-time functioning of a small number of public cloud services,” Heiser says.


Why Soul-Based Leadership Will Change the Nature of Remote and Hybrid Work

One of the most highly researched and evidence-based ways to invigorate executive function is through the ancient practice of mindfulness. Although it has taken on a somewhat "pop" aura compared with its origins some 2,500 years ago, developing mindfulness is actually hard work! But the payoff is big in terms of making more informed decisions and leading with care. I often recommend one technique I learned from one of my teachers, which I’ve personally modified a bit and call the Standing Ground Practice. You can be anywhere: sitting or standing at your desk or waiting on a corner to meet a friend. It’s ideal if you can go outside and stand facing a tree or something alive that’s naturally rooted in the earth, but it’s not necessary for the practice to be effective in this context. After finding your spot, bring your attention to the contact point between your feet and the ground or floor beneath you. Focus on that point and consider what it feels like. Thoughts about all kinds of things will most certainly interrupt.



Quote for the day:

"Discipline is the bridge between goals and accomplishment." -- Jim Rohn

Daily Tech Digest - October 25, 2021

Why you should use a microservice architecture

Simply moving your application to a microservice-based architecture is not sufficient. It is still possible to have a microservice-based architecture, but have your development teams work on projects that span services and create complex interactions between your teams. Bottom line: You can still be in the development muck, even if you move to a microservice-based architecture. To avoid these problems, you must have a clean service ownership and responsibility model. Each and every service needs a single, clear, well-defined owner who is wholly responsible for the service, and work needs to be managed and delegated at a service level. I suggest a model such as the Single Team Oriented Service Architecture (STOSA). This model, which I talk about in my book Architecting for Scale, provides the clarity that allows your application—and your development teams—to scale to match your business needs. Microservice architectures do come at a cost. While individual services are easier to understand and manage, a microservices application as a whole has significantly more moving parts and becomes a more complex beast of its own.


Routine is a new productivity app that combines task management and notes

One of the most opinionated features of Routine is the dashboard. Whatever you’re doing on your computer, you can pull up the Routine dashboard with a simple keyboard shortcut. By default, that shortcut is Ctrl-Space. The Routine app adds an overlay on top of your screen with a few widgets. It looks a bit like the now-defunct Dashboard on macOS. On that dashboard, you’ll find a handful of things. On the left, you can see the tasks you have to complete today. On the right, you can see how much time you have left before your next meeting and some information about that event. That data is pulled directly from your Google Calendar account. In the center of the screen, Routine displays a big input field called the Console. You can type text and then press enter to create a new task from there. It works a bit like the ‘Quick Add’ feature in Todoist. The idea is that you can add a task without wasting time opening your to-do app, moving to the right project, clicking the add task button and entering text into several fields. With Routine, you can press Ctrl-Space, type some text, press enter and you’re done.


3 Lessons I Learned The Hard Way As A Data Scientist

Whatever algorithm you implement or analysis you make, the results are used in downstream processes or in production. Thus, it is vitally important to make sure the results are correct. By correct results, I do not mean having no errors in your predictions or hitting 100% accuracy, which would be neither reasonable nor legitimate. In fact, you should be really suspicious of results that are too good to be true. The mistakes I mention are usually data-related issues. For instance, you might make a mistake while joining stock information for products from an SQL table to your main table. That results in serious problems if your solution is based on product stocks. There are almost always controls in your code that prevent mistakes. However, it is not possible to think of each and every possible mistake. Thus, taking a second look is always beneficial. ... The glorious world of machine learning algorithms is very attractive. The urge to use a fancy algorithm and build a model to perform predictions might cause you to skip digging into the data.
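The join mistake described above is easy to sketch. The following is our own illustration, with plain arrays standing in for SQL tables and invented field names: a left join of stock onto products, followed by the "second look" that counts rows which silently failed to match.

```javascript
// Hedged sketch of a post-join sanity check. Tables and fields are invented.
const products = [
  { id: 1, name: 'widget' },
  { id: 2, name: 'gadget' },
  { id: 3, name: 'gizmo' },
];
const stock = [
  { id: 1, qty: 10 },
  { id: 2, qty: 0 },
  // id 3 is missing -- exactly the kind of silent join mistake to catch
];

// Left join: every product row survives; unmatched rows get qty = null.
function leftJoinStock(products, stock) {
  const byId = new Map(stock.map((s) => [s.id, s.qty]));
  return products.map((p) => ({
    ...p,
    qty: byId.has(p.id) ? byId.get(p.id) : null,
  }));
}

// The "second look": count rows that failed to match before using results.
function unmatchedCount(joined) {
  return joined.filter((r) => r.qty === null).length;
}

const joined = leftJoinStock(products, stock);
console.log(unmatchedCount(joined)); // → 1
```

Checking the unmatched count (and failing loudly when it is unexpectedly high) is cheap insurance against shipping predictions built on a broken join.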


Research finds consumer-grade IoT devices showing up... on corporate networks

"Remote workers need to be aware that IoT devices could be compromised and used to move laterally to access their work devices if they're both using the same home router, which in turn could allow attackers to move onto corporate systems," said Palo Alto. Poor IoT device security stems mainly from manufacturers' desire to keep price points low, cutting security out as an unnecessary overhead. This approach inadvertently exposed large numbers of easily pwned devices to the wider internet – causing such a headache that governments around the world are now preparing to mandate better IoT security standards. Even IoT trade groups have woken up to the threat, albeit perhaps the threat of regulation rather than the security threat, but if that's what it takes, the outcome is no bad thing. ... Half of respondents said they worried about attacks against their industrial IoT devices, with 46 per cent being similarly worried about connected cameras being compromised. Smart cameras are a tried-and-trusted compromise method for miscreants.


The Rise Of No-Code And Low-Code Solutions: Will Your CTO Become Obsolete?

There are many reasons behind the rise of no-code and low-code tools, but the key one is a large imbalance between the ever-growing demand for software development services and the shortage of skilled developers in the market. For decades, there has been a steady move away from complicated coding in favor of easy-to-use visual tools. However, over time, no-code and low-code platforms have become more sophisticated, allowing non-developers to build more powerful websites and applications without hiring software specialists. That has even evoked some neo-Luddite concerns and discussions about the potential of such platforms to make good old software developers obsolete. But what’s behind it? Both no-code and low-code approaches hide the complexities of software programming under the mask of high-level abstractions. Low-code reduces programming effort down to minimum levels, and no-code empowers anyone to create apps without any knowledge of programming.


Complex Systems: Microservices and Humans

There is one aspect to this that I think is worth talking about, and that is that we actually already have an organization of people. We work in organizations that are, in general, organized into teams. You see a theoretical org chart here on the left. This might look like something that you might see in your own companies. We have these org charts, and these organizations of teams. Then that org chart doesn't map very neatly onto the microservices architecture necessarily, and maybe it shouldn't. The interrelationships between these teams are actually more subtle and often more complicated than what you see in the org chart. That is because if you have microservices, and you have dependencies between these microservices and interactions between them, then the teams owning them, by necessity, sometimes need to interact with each other. Microservices are constructed in a way that gives as much independence as possible and as much autonomy as possible to the individual teams. 


Maximizing agile productivity to meet shareholder commitments

Companies’ public commitments to ambitious—and sometimes expansive—goals tend to have multiyear timelines, while agile teams are trained to focus on the next three to six months. In organizations with siloed processes, product owners often feel that they don’t have enough visibility into their organizations’ processes to forecast the timeline for their initiatives, let alone to predict the long-term impact of their work. To balance the demands of the near future with longer-term goals, the companies that meet their transformation goals support agile teams with information and expertise. Successful companies provide product owners with relevant financial and operational data for the company, benchmarked to best-in-class organizations, to help them assess the potential value of their work for the next 18–24 months. They also assign initiative owners and relevant subject-matter experts from business functions early in the research and discovery process to help quantify possible improvements to the existing journey.


Satellite IoT dreams are crashing into reality

Even with smaller satellites, building a profitable wireless network is hard. On one side, there’s a capital-intensive phase that requires establishing connectivity (in this case, by building and launching satellites) and on the other, these companies must establish a market for the connectivity. But while the economics of building and launching satellites have changed dramatically, the demand for devices that rely on satellite networks hasn’t kept up. The biggest growth has come from people-tracking products, such as the Garmin inReach walkie-talkies, which people can wear into the wilderness and use to get help if needed. There are also rumors that Apple may include some form of satellite service in an upcoming iPhone. While this is a real and growing market, however, it isn’t enough to justify the launch of constellations by almost a dozen companies whose goal is to be IoT connectivity providers. So former connectivity players eschew bandwidth and turn to full solutions in order to provide a service that isn’t a commodity and eke out more revenue per customer.


Interesting Application Garbage Collection Patterns

When an application is caching many objects in memory, GC events cannot drop the heap usage all the way to the bottom of the graph (as in the earlier ‘healthy saw-tooth’ pattern). ... you can notice that heap usage keeps growing. When it reaches around ~60GB, a GC event (depicted as a small green square in the graph) gets triggered. However, these GC events aren’t able to drop the heap usage below ~38GB. Please refer to the dotted black arrow line in the graph. In contrast, in the earlier ‘healthy saw-tooth’ pattern, you can see heap usage dropping all the way to the bottom, to ~200MB. When you see this sort of pattern (i.e., heap usage never dropping all the way to the bottom), it indicates that the application is caching a lot of objects in memory. You may then want to investigate your application’s heap using heap dump analysis tools like yCrash, HeapHero, or Eclipse MAT and figure out whether you need to cache this many objects in memory. Often, you will uncover objects that don’t need to be cached in memory at all. Here is the real-world GC log analysis report, which depicts this ‘heavy caching’ pattern.
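One common remedy, once a heap-dump investigation shows the cache holds objects that need not live forever, is to bound the cache so the collector can reclaim old entries. Below is a sketch of that idea (our own illustration, not from the article): a tiny LRU cache built on JavaScript's insertion-ordered Map. An unbounded cache pins every entry, which is what raises the "floor" the GC can never drop below; an eviction limit restores a floor the collector can actually reach.

```javascript
// Our own sketch: a bounded LRU cache. Map iterates keys in insertion
// order, so the first key is always the least recently used one.
class LruCache {
  constructor(limit) {
    this.limit = limit;
    this.map = new Map();
  }
  get(key) {
    if (!this.map.has(key)) return undefined;
    const value = this.map.get(key);
    this.map.delete(key); // re-insert to refresh recency
    this.map.set(key, value);
    return value;
  }
  set(key, value) {
    if (this.map.has(key)) this.map.delete(key);
    this.map.set(key, value);
    if (this.map.size > this.limit) {
      // evict the least recently used entry
      this.map.delete(this.map.keys().next().value);
    }
  }
}

const cache = new LruCache(1000);
for (let i = 0; i < 100000; i++) {
  cache.set(i, { payload: 'x'.repeat(100) });
}
// Only the most recent 1000 entries remain reachable; the other
// 99,000 objects are garbage-collectable instead of pinned forever.
```

The same principle applies in any managed runtime: whatever the cache library, an explicit size (or time-to-live) limit is what lets the saw-tooth return to a low baseline.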


Designing the Internet of Things: role for enterprise architects, IoT architects, or both?

Great use cases, but an architectural nightmare that calls for a new role to plan and piece it all together into a coherent and viable system. This may be someone in a relatively new role, an IoT architect, or it may mean expanding the current role of enterprise architects. The need for architects of either stripe was recently explored in a Gartner eBook, which looked at the ingredients needed to ensure success with enterprise IoT. ... Those having such capabilities in two or more of these areas will be in extremely high demand. The good news is that organizations can use existing digital business efforts to train up candidates." Responsibilities for the IoT architect role include the following: "Engaging and collaborating with stakeholders to establish an IoT vision and define clear business objectives."; "Designing an edge-to-enterprise IoT architecture."; "Establishing processes for constructing and operating IoT solutions."; and "Working with the organization's architecture and technical teams to deliver value." Then there's the enterprise architect -- whose role is likely to be greatly expanded to encompass the extended architectures the IoT is bringing.



Quote for the day:

"Leadership is familiar, but not well understood." -- Gerald Weinberg

Daily Tech Digest - October 24, 2021

Artificial Intelligence Is Smart, but It Doesn’t Play Well With Others

Humans hating their AI teammates could be of concern for researchers designing this technology to one day work with humans on real challenges — like defending from missiles or performing complex surgery. This dynamic, called teaming intelligence, is a next frontier in AI research, and it uses a particular kind of AI called reinforcement learning. A reinforcement learning AI is not told which actions to take, but instead discovers which actions yield the most numerical “reward” by trying out scenarios again and again. It is this technology that has yielded the superhuman chess and Go players. Unlike rule-based algorithms, these AIs aren’t programmed to follow “if/then” statements, because the possible outcomes of the human tasks they’re slated to tackle, like driving a car, are far too many to code. “Reinforcement learning is a much more general-purpose way of developing AI. If you can train it to learn how to play the game of chess, that agent won’t necessarily go drive a car. But you can use the same algorithms to train a different agent to drive a car, given the right data,” Allen says. “The sky’s the limit in what it could, in theory, do.”
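The trial-and-error loop described here can be sketched in a few lines. The epsilon-greedy bandit below is our own toy illustration (invented reward probabilities), not the researchers' system: the agent is never told which action is right; it simply tracks the average reward each action has produced and mostly picks the best-looking one, while occasionally exploring.

```javascript
// Toy reinforcement learning: an epsilon-greedy two-armed bandit.
// rewardProbs are invented; the agent must discover which arm pays better.
function epsilonGreedyBandit(rewardProbs, steps, epsilon, rng = Math.random) {
  const counts = rewardProbs.map(() => 0); // times each action was taken
  const values = rewardProbs.map(() => 0); // running mean reward per action
  for (let t = 0; t < steps; t++) {
    const action =
      rng() < epsilon
        ? Math.floor(rng() * rewardProbs.length) // explore a random action
        : values.indexOf(Math.max(...values));   // exploit the best so far
    const reward = rng() < rewardProbs[action] ? 1 : 0; // try it, observe
    counts[action] += 1;
    // incremental update of the mean reward estimate
    values[action] += (reward - values[action]) / counts[action];
  }
  return { counts, values };
}

const { counts } = epsilonGreedyBandit([0.2, 0.8], 5000, 0.1);
// After enough trials the agent overwhelmingly favors the better action.
```

Scaling this "learn from numerical reward" loop from two arms to chess, Go, or driving is a matter of richer state and function approximation, but the underlying idea is the same.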


CDR: The secret cybersecurity ingredient used by defense and intelligence agencies

Employees in the defense and intelligence sector are in near-constant contact with each other, sharing information often under challenging circumstances. They move files and documents from low trust environments into networks that hold a nation’s most sensitive data, where a data breach could have a serious impact on national security. Consequently, when it comes to sharing any kind of document, these teams cannot risk threats slipping through the net. Human attackers are now using machines to engineer malware at a pace only imaginable a few years ago. Today, it’s possible to engineer a new piece of malware and to make each version of that file suitably different so that it’s almost impossible for traditional malware protection solutions to identify. In the same way that Facebook or Twitter use algorithms to create a truly unique social feed of information that is tailored to the interests and tastes of a user, bad actors can use similar algorithms to deploy essentially the same underlying threats but packaged in ways that simply evade detection.

Gartner advises tech leaders to prepare for action as quantum computing spreads

Cambridge Quantum’s efforts to expand quantum infrastructure got significant backing earlier this year when Honeywell said it would merge its own quantum computing operations with Cambridge Quantum, to form an independent company to pursue cybersecurity, drug discovery, optimization, material science, and other applications, including AI. Honeywell said it would invest between $270 million and $300 million in the new operation. Cambridge Quantum said it would remain independent, working with various quantum computing players, including IBM. The lambeq work is part of an overall AI project that is the longest-term project among the efforts at Cambridge Quantum, said Ilyas Khan, founder and CEO of Cambridge Quantum, in an e-mail interview. “We might be pleasantly surprised in terms of timelines, but we believe that NLP is right at the heart of AI more generally and therefore something that will really come to the fore as quantum computers scale,” he said. Khan cited cybersecurity and quantum chemistry as the most advanced application areas in Cambridge Quantum’s estimation.


How to Not Lose Your Job to Low-Code Software

The amount of work you have is driven by the ability of software to make a meaningful difference in your organization. Take a look at your current queue of work. If your team is like most IT teams, there will be a mountain of unmet demand for new applications or additional functionality for existing applications. Thinking that any amount of automation will reduce that demand to zero is like thinking that a faster car will get you to Mars. If low-code software starts taking some of your work, there will likely be other projects you can work on. If you handle this right, you can even shuffle some of the painful projects over to the party-goers on the low-code bus. ... Secondly, and more fundamentally, there are certain aspects of software engineering that are harder to automate than others - making it unsuitable terrain for the low-code party bus to drive across. For example, low-code tools make it easy for non-developers to create a table to store data. But they can't do much to help the non-developer structure their tables to best map to the business problem they are trying to solve.


API contract testing with Joi

When you sign a contract, you expect both parties to hold their end of the bargain. The same can be true for testing applications. Contract testing is a way to make sure that services can communicate with each other and that the data shared between the services is consistent with a specified set of rules. In this post, I will guide you through using Joi as a library to create API contracts for services consuming an API. ... Before we get started, let me give you some background about contract testing. This kind of testing provides confidence that different services work when they are required to. Imagine that an organization has multiple payment services that utilize an Authentication API. The API logs users into an application with a username and a password. It then assigns them an access token when the log-in operation is successful. Other services like Loans and Repayments require the Authentication API service once users are logged in. ... Contract tests are designed to monitor the state of an application and notify testers when there is an unexpected result. Contract tests are most effective when they are used by a tool that relies on the stability of other services.
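To make the Authentication API example concrete: a consumer-side contract states what shape the login response must have, and the test fails when the provider's response drifts. Joi isn't bundled here, so the sketch below hand-rolls the core idea to stay dependency-free; with Joi you would express the same contract as `Joi.object({ token: Joi.string().required(), expiresIn: Joi.number().integer().positive() })` and call `schema.validate(response)`. The field names are illustrative, not from the post.

```javascript
// Hedged sketch of a consumer-side API contract. In real code, Joi would
// replace this hand-rolled contract object with a declarative schema.
const authContract = {
  token: (v) => typeof v === 'string' && v.length > 0,
  expiresIn: (v) => Number.isInteger(v) && v > 0,
};

// Validate a response from the Authentication API against the contract.
function checkContract(contract, payload) {
  const errors = [];
  for (const [field, isValid] of Object.entries(contract)) {
    if (!(field in payload)) errors.push(`missing field: ${field}`);
    else if (!isValid(payload[field])) errors.push(`invalid field: ${field}`);
  }
  return errors;
}

const goodResponse = { token: 'abc123', expiresIn: 3600 };
const badResponse = { token: '' }; // empty token, no expiry: breaks contract

console.log(checkContract(authContract, goodResponse)); // → []
console.log(checkContract(authContract, badResponse));
// → ['invalid field: token', 'missing field: expiresIn']
```

Services like Loans and Repayments would run such checks in their own test suites, so a breaking change in the Authentication API surfaces before deployment rather than in production.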


Regulating Crypto: Is It Different – Or Is It the Same?

Regulators need to know what the technology is capable of, but they need not know every technical detail just to make good law. “If you can understand clearly what the technology is doing, I think that you can make pretty good judgments about what the fundamental financial activity is and what regulatory box that financial activity can or should fit in,” he told Webster. Strip those technologies down a bit, and they boil down to some basic underpinning concepts that lend themselves to governance. At the core of blockchain and cryptos is database architecture, said Gerety. “It has some neat properties, but nowhere else in the financial services industry do you get regulated differently if you use SAP or Oracle,” he said. To get a sense of how one might approach “newness” in a sector, he offered a concept of a matrix, with axes denoting what the future “feels like” and might actually “be.” Babies will pretty much always “be” and “feel” the same. Not much in the way of technology will change the experience or feelings one will have with birthing and raising a child, despite the newness of, well, becoming a parent.


Information Theory: Principles and Apostasy

Let’s start with a data science interview question. Usually, as part of an initial screening round for entry-level candidates, I like to find an example on their CV of a project that used real-life data. Real-life data is much nastier than academic and research data. It’s chock-full of missing values, mixed (integer and string) data and outliers that make consuming and modeling the information grossly more difficult. Invariably, most of the conversation revolves around these real-world considerations. How do you handle missing data? Usual answers involve some sort of information-replacement strategy, like replacing missing values with the average value of the column. Fair and reasonable. How do we deal with malformed or mixed data? Again, usually a fair answer involving mapping strings to numbers. Finally, what did you do about the large outlier events? Usually the answer is that they ‘removed them’ because you ‘can’t be expected to predict rare events.’ The ultimate justification: it improved the model’s accuracy. That’s a good answer if building a forecast is a game or contest, much worse if you want to use it.
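The two "fair" answers above, mean imputation and string-to-number mapping, can be sketched in a few lines. This is a generic illustration on made-up data, not code from any candidate's project:

```javascript
// Replace missing entries (null or NaN) with the mean of the present values.
function imputeMean(values) {
  const present = values.filter((v) => v !== null && !Number.isNaN(v));
  const mean = present.reduce((a, b) => a + b, 0) / present.length;
  return values.map((v) => (v === null || Number.isNaN(v) ? mean : v));
}

// Map each distinct string to an integer code in order of first appearance.
function encodeStrings(values) {
  const codes = new Map();
  return values.map((v) => {
    if (!codes.has(v)) codes.set(v, codes.size);
    return codes.get(v);
  });
}

console.log(imputeMean([1, null, 3]));       // [1, 2, 3]
console.log(encodeStrings(['a', 'b', 'a'])); // [0, 1, 0]
```

Note that the third "answer" the author criticizes, dropping outliers, is deliberately not shown: the article's point is that discarding rare events to boost accuracy undermines a forecast meant for real use.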


The OCC Officially Recognizes the Critical and Permanent Role of Blockchain in Banking

This is noteworthy for a couple of reasons. First, it is a recognition that many banks, along with a slew of other financial institutions, are adopting DLT as a technology enabling better processes. Simply put, financial institutions are moving past the exploratory phase of DLT and are now actually implementing the technology into their operations. Secondly, the OCC is declaring its intent to explore and define appropriate governance processes for banks to deploy when such changes are implemented. In other words, the OCC is defining its intent to regulate how such changes should take place. ... The immutability of a distributed ledger provides a new level of security. It is challenging to establish a single customer view across different jurisdictions and business lines. With mutualized data management, DLT allows permitted parties to share data securely and in real time, which could address challenges of Know Your Customer (KYC) and Anti-Money Laundering (AML). The themes are clear – DLT injected into the banking and financial ecosystem is an equalizer, a simplifier and a fortifier.


How data drives Air Canada’s cargo business

For business intelligence, the airline has been a long-term user of WebFocus from Tibco. It also uses Microsoft Power BI. Riboulet’s reason for using two BI platforms is that “they complement each other”, each having different functions it finds useful. For example, WebFocus offers Air Canada the ability to push out reports via email, a feature not available in Power BI. Riboulet says this is useful for people working in operations, who may only have access to their phone and need to see embedded reports. Also, the data team noticed that many business users require similar datasets and attributes, which can be pulled together into pre-built reports. The company also uses the data grid feature in WebFocus to aggregate data in a way that can easily be customised by users and exported to Microsoft Excel. It has also deployed WebFocus Hyperstage as a staging area for data, to avoid direct access to its on-premises database systems. Riboulet views the data team at Air Canada Cargo as internal consultants who discuss data requirements with businesspeople.


How Much Power Should Finance Have Over Their Automations?

If you want to automate your finance function and lower the cost of running your finance and accounting operations, taking control can provide you with numerous benefits. These include prioritizing the processes that align with your strategic vision, controlling resource investments and commitments, and ensuring SOX control frameworks are adhered to from the outset. It’s not surprising that some finance organizations can feel underserved by their IT partners: IT is responsible for supporting the whole organization, and finance operations can take a back seat to other priorities. This does not mean that IT should be left aside. IT will have a role even if you run your own automation program end to end, and you will need them to have a seat at the table. You will want to avoid creating a shadow IT group and truly focus your financial resources on process improvement and automation. It’s best practice to leverage your IT team for infrastructure, network security, understanding ERP/system schedules, roadmaps, and disaster recovery processes (at a minimum). It is also recommended to adopt the cloud version of the tools, which can significantly reduce demands on your IT organization.



Quote for the day:

"Problem-solving leaders have one thing in common: a faith that there's always a better way." -- Gerald M. Weinberg