Daily Tech Digest - May 19, 2022

Five areas where EA matters more than ever

While resiliency has always been a focus of EA, “the focus now is on proactive resiliency” to better anticipate future risks, says Barnett. He recommends expanding EA to map not only a business’s technology assets but also all of its processes that rely on vendors, as well as on part-time and contract workers who may become unavailable due to pandemics, sanctions, natural disasters, or other disruptions. Businesses are also looking to use EA to anticipate problems and plan for capabilities such as workload balancing or on-demand computing to respond to surges in demand or system outages, Barnett says. That requires enterprise architects to work more closely with risk management and security staff to map the dependencies among the components in the architecture, gauge the likelihood and severity of disruptions, and formulate plans to cope with them. EA can help, for example, by showing which cloud providers share the same network connections, or which shippers rely on the same ports, so that a “backup” provider won’t suffer the same outage as the primary provider, he says.


Build or Buy? Developer Productivity vs. Flexibility

To make things a bit more concrete, let’s look at a very simple example that shows the positives of both sides. Developers are the primary audience for InfluxData’s InfluxDB, a time series database. It provides both client libraries and direct access to the database via API to give developers an option that works best for their use case. The client libraries provide best practices out of the box so developers can get started reading and writing data quickly. Things like batching requests, retrying failed requests and handling asynchronous requests are taken care of so the developer doesn’t have to think about them. Using the client libraries makes sense for developers looking to test InfluxDB or to quickly integrate it with their application for storing time series data. On the other hand, developers who need more flexibility and control can choose to interact directly with InfluxDB’s API. Some companies have lengthy processes for adding external dependencies or already have existing internal libraries for handling communication between services, so the client libraries aren’t an option.
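
A rough sketch of the two paths in Python (the URL, token, org and bucket values below are placeholders): the official client library hides batching, retries and async handling behind its write API, while the raw HTTP route leaves those concerns to the caller.

    import requests
    from influxdb_client import InfluxDBClient, Point
    from influxdb_client.client.write_api import SYNCHRONOUS

    # Path 1: the client library applies batching/retry best practices for you
    client = InfluxDBClient(url="http://localhost:8086", token="my-token", org="my-org")
    write_api = client.write_api(write_options=SYNCHRONOUS)
    write_api.write(bucket="metrics",
                    record=Point("cpu").tag("host", "server01").field("usage", 42.5))

    # Path 2: hit the HTTP API directly with line protocol; batching and
    # retrying failed requests are now your problem
    requests.post(
        "http://localhost:8086/api/v2/write?org=my-org&bucket=metrics&precision=s",
        headers={"Authorization": "Token my-token"},
        data="cpu,host=server01 usage=42.5",
    )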


Enterprises shore up supply chain resilience with data

“Digital dialogue between trading partners is crucial, not just for those two [direct trading partners], but also for the downstream effects,” he says, adding that when it comes to supply chains and procurement, SAP’s focus is on helping its customers ensure that the data “flows to the right trading partners so that they can make proactive decisions in moving assets, logistics and doing the right purchasing”. He further adds that where supply chain considerations have traditionally been built around “cost, control and compliance”, companies are now looking to incorporate “connectivity, conscience and convenience” alongside those other factors. On the last point regarding convenience, Henrik says this refers to having “information at my fingertips when I need it”, meaning it is important for companies to not only collect data on their operations, but to structure it in a way that drives actionable insights. “Once you have actionable insights from the data, then real change happens, and that’s really what companies are looking for,” he says.


Ransomware is already out of control. AI-powered ransomware could be 'terrifying.'

If attackers were able to automate ransomware using AI and machine learning, that would allow them to go after an even wider range of targets, according to Driver. That could include smaller organizations, or even individuals. "It's not worth their effort if it takes them hours and hours to do it manually. But if they can automate it, absolutely," Driver said. Ultimately, “it's terrifying.” The prediction that AI is coming to cybercrime in a big way is not brand new, but it still has yet to manifest, Hyppönen said. Most likely, that's because the ability to compete with deep-pocketed enterprise tech vendors to bring in the necessary talent has always been a constraint in the past. The huge success of the ransomware gangs in 2021, predominantly Russia-affiliated groups, would appear to have changed that, according to Hyppönen. Chainalysis reports it tracked ransomware payments totaling $602 million in 2021, led by Conti's $182 million. The ransomware group that struck the Colonial Pipeline, DarkSide, earned $82 million last year, and three other groups brought in more than $30 million in that single year, according to Chainalysis.


Will quantum computing ever be available off-the-shelf?

Quantum computing will never exist in a vacuum, and to add value, quantum computing components need to be seamlessly integrated with the rest of the enterprise technology stack. This includes HPC clusters, ETL processes, data warehouses, S3 buckets, security policies, etc. Data will need to be processed by classical computers both before and after it runs through the quantum algorithms. This infrastructure is important: any speedup from quantum computing can easily be offset by mundane problems like disorganized data warehousing and sub-optimal ETL processes. Expecting a quantum algorithm to deliver an advantage with a shoddy classical infrastructure around it is like expecting a flight to save you time when you don’t have a car to take you to and from the airport. These same infrastructure issues often arise in many present-day machine learning (ML) use cases. There may be many off-the-shelf tools available, but any useful ML application will ultimately be unique to the model’s objective and the data used to train it. 


Addressing the skills shortage with an assertive approach to cybersecurity

All too often, businesses do not see investing in security strategy and technologies as a priority – until an attack occurs. The assumption may be that only the wealthiest industries or those with highly classified information would require the most up-to-date cybersecurity tactics and technology, but this is simply not the case. All organizations need to adopt a proactive approach to security, rather than having to deal with the aftermath of an incident. By doing so, companies and organizations can significantly mitigate any potential damage. Traditionally, security awareness may have been restricted to specific roles, meaning that only a select few people had the training and understanding required to deal with cyber-attacks. Nowadays, employees in every role, at every level, in all industries must have some knowledge of how to secure themselves and their work against breaches. Training should be made available for all employees to increase their awareness, and organizations need to prioritize investment in secure, up-to-date technologies to ensure their protection.


Easily Optimize Deep Learning with 8-Bit Quantization

There are two challenges with quantization: how to do it easily (in the past it has been a time-consuming process) and how to maintain accuracy. Both of these challenges are addressed by the Neural Network Compression Framework (NNCF). NNCF is a suite of advanced algorithms for optimizing machine learning and deep learning models for inference in the Intel® Distribution of OpenVINO™ toolkit. NNCF works with models from PyTorch and TensorFlow. One of the main features of NNCF is 8-bit uniform quantization, which uses recent academic research to create accurate and fast models. The technique we will be covering in this article is called quantization-aware training (QAT). This method simulates the quantization of weights and activations while the model is being trained, so that operations in the model can be treated as 8-bit operations at inference time. Fine-tuning is used to recover the accuracy lost to quantization. QAT delivers better accuracy and reliability than quantizing a model after it has been trained. Unlike other optimization tools, NNCF does not require users to change the model manually or learn how the quantization works.
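
To make the workflow concrete, here is a minimal sketch of NNCF's PyTorch quantization-aware training flow as documented around this time; model and train_loader stand in for an ordinary torch module and DataLoader, and the exact configuration keys may vary between NNCF releases.

    from nncf import NNCFConfig
    from nncf.torch import create_compressed_model, register_default_init_args

    # Describe the input shape and ask for the 8-bit quantization algorithm
    nncf_config = NNCFConfig.from_dict({
        "input_info": {"sample_size": [1, 3, 224, 224]},
        "compression": {"algorithm": "quantization"},
    })

    # Let NNCF initialize quantizer ranges from a few batches of real data
    nncf_config = register_default_init_args(nncf_config, train_loader)

    # Wrap the model with fake-quantization ops, then fine-tune as usual
    compression_ctrl, model = create_compressed_model(model, nncf_config)
    # ... ordinary PyTorch training loop here ...

    # Export for inference with the OpenVINO toolkit
    compression_ctrl.export_model("quantized_model.onnx")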


Apache Druid: A Real-Time Database for Modern Analytics

With its distributed and elastic architecture, Apache Druid prefetches data from a shared data layer into an infinite cluster of data servers. Because there’s no need to move data at query time and the cluster can scale flexibly, this kind of architecture performs faster than a decoupled query engine such as a cloud data warehouse. Additionally, Apache Druid can process more queries per core by leveraging automatic, multilevel indexing that is built into its data format. This includes a global index, data dictionary and bitmap index, which goes beyond a standard OLAP columnar format and provides faster data crunching by maximizing CPU cycles. ... Apache Druid provides a smarter and more economical choice because of its optimized storage and query engine that decreases CPU usage. “Optimized” is the keyword here; you want your infrastructure to serve more queries in the same amount of time rather than having your database read data it doesn’t need to.
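
All of that indexing is transparent to clients, which typically query Druid through its SQL API. A rough sketch (the router URL, datasource and column names below are placeholder assumptions taken from the Druid quickstart):

    import requests

    # POST a SQL query to Druid's SQL endpoint; the router listens on 8888
    # in the standard quickstart configuration
    resp = requests.post(
        "http://localhost:8888/druid/v2/sql",
        json={"query": """
            SELECT channel, COUNT(*) AS edits
            FROM wikipedia
            GROUP BY channel
            ORDER BY edits DESC
            LIMIT 5
        """},
    )
    print(resp.json())  # a JSON array of result rows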


Compete to Communicate on Cybersecurity

At its core, cybersecurity depends on communication. Outdated security policies that are poorly communicated are just as dangerous as substandard software code and other flawed technical features. Changing human behavior in digital security falls on the technology companies themselves, which need to get better at explaining digital security issues to their employees and customers. In turn, tech companies can help employees and customers understand what they can do to make things better and why they need to be active participants in helping to defend themselves, our shared data and digital infrastructure. Instead of competing on the lowest price or claims of best service, how do we incentivize service vendors, cloud providers, device manufacturers and other relevant technology firms to pay more attention to how they communicate with users around security? Rules and regulations? Possibly. Improving how companies communicate and train on security? Absolutely. Shaping a marketplace where tech companies compete more intensively for business on the technical and training elements of security? Definitely.


A philosopher's guide to messy transformations

In the domain of expertise, people base their understanding of transformation on practical insight into the history and culture of the company. A question from an attendee at the panel I conducted illustrated this nicely: “How do you get an organization with a legacy of being extremely risk averse to embrace agility, which can be perceived as a more risky, trial-and-error approach?” The question acknowledges and accepts that the company needs to embrace agility but demonstrates neither insight nor interest as to why it needs to do so. Whether the questioner trusts senior management’s decision to embrace agility, or she has other reasons for ignoring the “why,” it is obvious that she wants to know about the “how.” Too often leaders forget about the how. And that can be a costly mistake. ... “When you have an organization that has been organically growing over 90 years, then the culture is embedded in the language and the behaviors of the people working in the organization,” he said. The strength of legacy companies is that their culture is defined by conversations and behaviors that have been evolving for decades. 



Quote for the day:

"The great leaders are like best conductors. They reach beyond the notes to reach the magic in the players." -- Blaine Lee

Daily Tech Digest - May 18, 2022

Google Cloud launches services to bolster open-source security, simplify zero-trust rollouts

On the zero-trust front, Google is introducing BeyondCorp Enterprise Essentials, which is designed to help enterprise customers begin to deploy zero-trust environments. The new solution brings context-aware access controls for SaaS applications or any other apps connected via Security Assertion Markup Language (SAML), which is an XML-based protocol that supports real-time authentication and authorization across federated Web services environments. It also includes threat and data protection capabilities, such as data loss prevention, malware and phishing protection, and URL filtering, integrated in the Chrome browser, according to Potti. “It’s a simple and effective way to protect your workforce, particularly an extended workforce or users who leverage a ‘bring your own device’ model,” Potti stated. “Admins can also use Chrome dashboards to get visibility into unsafe user activity across unmanaged devices.” BeyondCorp Enterprise includes an app and client connector that can simplify connections to apps running on other clouds such as Azure or AWS without the need to open firewalls or set up site-to-site VPN connections, Potti stated.


Deployment of Low-Latency Solutions in the Cloud

Cloud-native environments offer a common platform and interfaces to ease definition and deployment of complex application architectures. This infrastructure enables the use of mature off-the-shelf components to solve common problems such as leader election, service discovery, observability, health-checks, self-healing, scaling, and configuration management. Typically the pattern has been to run containers atop of virtual machines in these environments; however, now all the main cloud providers offer bare-metal (or near bare-metal) solutions, so even latency-sensitive workloads can be hosted in the cloud. This is the first iteration of a demonstration of how Chronicle products can be used in these architectures and includes solutions to some of the challenges encountered by our clients in cloud and other environments. By leveraging common infrastructure solutions, we can marry the strengths of Chronicle products with the convenience of modern production environments to provide simple low-latency, operationally robust systems.


FBI and NSA say: Stop doing these 10 things that let the hackers in

The joint alert recommends that MFA be enforced for everyone, especially since RDP is commonly used to deploy ransomware. "Do not exclude any user, particularly administrators, from an MFA requirement," CISA notes. Incorrectly applied privileges or permissions and errors in access control lists can prevent the enforcement of access control rules and could give unauthorized users or system processes access to objects. Of course, make sure software is up to date. But also don't use vendor-supplied default configurations or default usernames and passwords. These might be 'user friendly' and help the vendor deliver faster troubleshooting, but they're often publicly available 'secrets'. The NSA strongly urges admins to remove vendor-supplied defaults in its network infrastructure security guidance. ... "These default credentials are not secure – they may be physically labeled on the device or even readily available on the internet. Leaving these credentials unchanged creates opportunities for malicious activity, including gaining unauthorized access to information and installing malicious software."


The rise of servant leadership

Though the style originated in the 1970s, servant leadership has gained momentum today as the Great Resignation reveals the pandemic’s mental toll on workers and employees leave their jobs in droves in search of more meaningful work. The pressure to attract and retain talent has never been greater, and companies are moving away from command-and-control style leadership in favor of more purpose-driven management, says David Dotlich, president and senior client partner at Korn Ferry. “We’re seeing this as a big trend across all industries,” Dotlich says. More than half of Korn Ferry’s clients now view purpose as the center of their leadership, he says. “They’re signing up for help” to answer those questions of who do we serve, how do we help, how do we make a difference, how do we change the world, and they’re receiving individual training and tools. ... Servant leaders know how to build trust, provide the tools and support that employees need to grow, remove obstacles, listen more and talk less, and let employees create their own path for success. It can backfire though if employees aren’t dedicated to the team’s core mission.


Four ways to combat the cybersecurity skills gap

Some businesses attempt to narrow the gap by retraining their IT professionals. While there is a chance that some employees with technical skills may be able and willing to take on cybersecurity positions, they still need to have someone to teach them. Most cybersecurity experts today are self-taught and there is very little that an organization can do to help because the availability of security certifications is also limited. However, the real problem is that organizations often perceive cybersecurity as something that only the dedicated cybersecurity workforce should deal with. This perception is the cause of several problems mentioned above, for example, the high level of stress and burnout for cybersecurity staff. Security teams often work alone and the rest of the organization is not aware, not educated, and worst of all: does not feel responsible for security. ... The cybersecurity industry is still a bit behind the trends and a lot of tools are still created with dedicated security specialists in mind. Such tools are difficult or even impossible to use in complex environments ...


Why You Should Care About Software Architecture

Broadly speaking, achieving “sustainability” is the focus of architectural work in software products. A software product can be considered sustainable if it is capable of meeting its current requirements, including QARs, without jeopardizing its ability to meet future requirements. As we stated in the previous section, quality attribute requirements drive the architecture, and meeting key QARs is essential to create sustainable architectural designs. Unfortunately, software systems “wear out” over time, as functional enhancements are being implemented, and new design decisions are made, which may stretch or even break the original architectural design. ... How do you know when your software system is wearing out, the same way you know when your car tires are wearing out and need to be replaced? Just as a physician may use many different kinds of tools to assess the health of an individual, different tools help a team assess software architecture fitness. Older systems may be difficult to understand because, as we mentioned earlier, their design decisions and assumptions are often not documented, and documentation, when it exists, is likely to be outdated.


Open-source standard aims to unify incompatible cloud identity systems

In a press release, Strata Identity stated that current popular cloud platforms use proprietary identity systems with individual policy languages, all of which are incompatible with each other. What’s more, each application must be hard-coded to work with a specific identity system, it added. Hexa has been designed to use IDQL to enable any number of identity systems to work together as a unified whole, without making changes to them or to applications, Strata Identity said. It works by abstracting identity and access policies from cloud platforms, authorization systems, data resources, and zero trust networks to discover what policies exist, then translates them from their native syntax into the generic, IDQL declarative policy, the vendor continued. It then orchestrates identity and access instructions across cloud systems and throughout apps, data resources, platforms, and networks by translating back into native, imperative policies of target systems via a cloud-based architecture.
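
IDQL's concrete syntax is defined by the Hexa project; purely as a hypothetical illustration (the field names below are invented, not the real schema), the idea is a single platform-neutral declarative policy that gets translated into each target system's native policy language:

    # Hypothetical sketch only; field names are illustrative, not actual IDQL.
    # Hexa would translate one declarative policy like this into each cloud
    # platform's native, imperative policy syntax.
    idql_policy = {
        "meta": {"version": "0.1", "policy_id": "profile-read"},
        "subject": {"members": ["user:alice@example.com"]},
        "actions": ["read"],
        "object": {"resource_id": "profile-service"},
        "condition": {"rule": "req.ip starts_with 10.0."},  # e.g., source-IP restriction
    }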


Vulnerabilities found in Bluetooth Low Energy give hackers access to numerous devices

This issue is not believed to be something that can be easily patched, nor is it merely an error in the Bluetooth specification. This exploit could affect millions of people, as BLE-based proximity authentication was not originally designed for use in critical systems such as locking mechanisms in smart locks, according to NCC Group. “What makes this powerful is not only that we can convince a Bluetooth device that we are near it—even from hundreds of miles away—but that we can do it even when the vendor has taken defensive mitigations like encryption and latency bounding to theoretically protect these communications from attackers at a distance,” said Sultan Qasim Khan, Principal Security Consultant and Researcher at NCC Group. “All it takes is 10 seconds—and these exploits can be repeated endlessly.” To start, the cybersecurity company points out that any product relying on a trusted BLE connection is vulnerable to attacks from anywhere in the world at any given time.


Augmented reality will give us superpowers

Over the next ten years, augmented reality will replace the mobile phone as our primary interface for digital content. Early adopters will embrace the lure of new, magical capabilities. Everyone else, skeptics included, will quickly find themselves at a disadvantage without omniscience, x-ray vision, superhuman recall, and dozens of other capabilities that are not even on the drawing board yet. This will drive adoption as quickly as the transition from flip phones to smartphones. After all, not upgrading your hardware will mean missing out on layers of useful information that everyone else can see. An augmented world is coming — one with the potential to be magical, embellished with artistic content and infused with superhuman abilities. At the same time, there are risks we must avoid, as augmented reality will give tech platforms unprecedented ability to track our activities and mediate our experiences. For these reasons, we need to push for a safe and regulated metaverse, especially the augmented metaverse. It will impact all of our lives in the very near future.


What’s new with ML.NET Automated ML (AutoML) and tooling

Training machine learning models is a time-consuming and iterative task. Automated Machine Learning (AutoML) automates that process by making it easier to find the best algorithm for your scenario and dataset. AutoML is the backend that powers the training experiences in Model Builder and the ML.NET CLI. Last year we announced updates to the AutoML implementation in our Model Builder and ML.NET CLI tools based on Neural Network Intelligence (NNI) and Fast and Lightweight AutoML (FLAML) technologies from Microsoft Research. These updates provided a few benefits and improvements over the previous solution, which include an increase in the number of models explored. ... Until recently, you could only take advantage of these AutoML improvements inside of our tools. We’re excited to announce that we’ve integrated the NNI / FLAML implementations of AutoML into the ML.NET framework so you can use them from a code-first experience. To get started today with the AutoML API install the latest pre-release version of the Microsoft.ML and Microsoft.ML.Auto NuGet packages using the ML.NET daily feed.



Quote for the day:

"Most people live with pleasant illusions, but leaders must deal with hard realities." -- Orrin Woodward

Daily Tech Digest - May 17, 2022

Only DevSecOps can save the metaverse

We’ve previously talked about “shifting left,” or DevSecOps, the practice of making security a “first-class citizen” when it comes to software development, baking it in from the start rather than bolting it on in runtime. Log4j, SolarWinds, and other high-profile software supply chain attacks only underscore the importance and urgency of shifting left. The next “big one” is inevitably around the corner. A more optimistic view is that far from highlighting the failings of today’s development security, the metaverse might be yet another reckoning for DevSecOps, accelerating the adoption of automated tools and better security coordination. If so, that would be a huge blessing to make up for all the hard work. As we continue to watch the rise of the metaverse, we believe supply chain security should take center stage and organizations will rally to democratize security testing and scanning, implement software bill of materials (SBOM) requirements, and increasingly leverage DevSecOps solutions to create a full chain of custody for software releases to keep the metaverse running smoothly and securely.


EU Parliament, Council Agree on Cybersecurity Risk Framework

"The revised directive aims to remove divergences in cybersecurity requirements and in implementation of cybersecurity measures in different member states. To achieve this, it sets out minimum rules for a regulatory framework and lays down mechanisms for effective cooperation among relevant authorities in each member state. It updates the list of sectors and activities subject to cybersecurity obligations, and provides for remedies and sanctions to ensure enforcement," according to the Council of the EU. The directive will also establish the European Union Cyber Crises Liaison Organization Network, EU-CyCLONe, which will support the coordinated management of large-scale cybersecurity incidents. The European Commission says that the latest framework is set up to counter Europe's increased exposure to cyberthreats. The NIS2 directive will also cover more sectors that are critical for the economy and society, including providers of public electronic communications services, digital services, waste water and waste management, manufacturing of critical products, postal and courier services and public administration, both at a central and regional level.


Catalysing Cultural Entrepreneurship in India

What constitutes CCIs varies across countries depending on their diverse cultural resources, know-how, and socio-economic contexts. A commonly accepted understanding of CCIs comes from the United Nations Educational, Scientific and Cultural Organization (UNESCO), which defines this sector as “activities whose principal purpose is production or reproduction, promotion, distribution or commercialisation of goods, services, and activities of a cultural, artistic, or heritage-related nature.” CCIs play an important role in a country’s economy: they offer recreation and well-being, while spurring innovation and economic development at the same time. First, a flourishing cultural economy is a driver of economic growth as attaching commercial value to cultural products, services, and experiences leads to revenue generation. These cultural goods and ideas are also contributors to international trade. Second, although a large workforce in this space is informally organised and often unaccounted for in official labour force statistics, cultural economies are some of the biggest employers of artists, craftspeople, and technicians. 


Rethinking Server-Timing As A Critical Monitoring Tool

Server-Timing is uniquely powerful, because it is the only HTTP Response header that supports setting free-form values for a specific resource and makes them accessible from a JavaScript Browser API separate from the Request/Response references themselves. This allows resource requests, including the HTML document itself, to be enriched with data during their lifecycle, and that information can be inspected for measuring the attributes of that resource! The only other header that’s close to this capability is the HTTP Set-Cookie / Cookie headers. Unlike Cookie headers, Server-Timing is only on the response for a specific resource, whereas Cookies are sent on requests and responses for all resources after they’re set and unexpired. Having this data bound to a single resource response is preferable, as it prevents ephemeral data about all responses from becoming ambiguous and avoids the ever-growing collection of cookies sent for remaining resources during a page load.
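
As a rough illustration (a sketch, not a drop-in implementation; the endpoint and metric names are made up), a server can attach per-response timings that the browser can later read from the PerformanceResourceTiming.serverTiming API:

    import time
    from flask import Flask, jsonify

    app = Flask(__name__)

    @app.route("/api/orders")
    def orders():
        start = time.perf_counter()
        rows = []  # ... fetch rows from the database here ...
        db_ms = (time.perf_counter() - start) * 1000

        resp = jsonify(orders=rows)
        # Free-form metrics bound to THIS response only; each entry is
        # name;desc="...";dur=<milliseconds>
        resp.headers["Server-Timing"] = f'db;dur={db_ms:.1f}, cache;desc="miss";dur=0'
        return resp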


Scalability and elasticity: What you need to take your business to the cloud

At a high level, there are two types of architectures: monolithic and distributed. Monolithic (or layered, modular monolith, pipeline, and microkernel) architectures are not natively built for efficient scalability and elasticity — all the modules are contained within the main body of the application and, as a result, the entire application is deployed as a single whole. There are three types of distributed architectures: event-driven, microservices and space-based. ... For application scaling, adding more instances of the application with load-balancing ends up scaling out the other two portals as well as the patient portal, even though the business doesn’t need that. Most monolithic applications use a monolithic database — one of the most expensive cloud resources. Cloud costs grow exponentially with scale, and this arrangement is expensive, especially regarding maintenance time for development and operations engineers. Another aspect that makes monolithic architectures unsuitable for supporting elasticity and scalability is the mean-time-to-startup (MTTS) — the time a new instance of the application takes to start. 


Proof of Stake and our next experiments in web3

Proof of Stake is a next-generation consensus protocol to secure blockchains. Unlike Proof of Work, which relies on miners racing each other with increasingly complex cryptography to mine a block, Proof of Stake secures new transactions to the network through self-interest. Validator nodes (run by people who verify new blocks for the chain) are required to put a significant asset up as collateral in a smart contract to prove that they will act in good faith. For instance, for Ethereum that is 32 ETH. Validator nodes that follow the network's rules earn rewards; validators that violate the rules will have portions of their stake taken away. Anyone can operate a validator node as long as they meet the stake requirement. This is key. Proof of Stake networks require lots and lots of validator nodes to validate and attest to new transactions. The more participants there are in the network, the harder it is for bad actors to launch a 51% attack to compromise the security of the blockchain. To add new blocks to the Ethereum chain, once it shifts to Proof of Stake, validators are chosen at random to create new blocks (validate).
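
As a toy illustration of those two mechanics, stake-weighted selection and slashing, consider the sketch below. It is deliberately simplified; Ethereum's actual proposer selection (RANDAO-based, among fixed 32-ETH validators) is considerably more involved.

    import random

    # Toy model only: real Proof of Stake protocols are far more involved.
    validators = {"alice": 32.0, "bob": 32.0, "carol": 32.0}  # staked collateral

    def pick_block_proposer():
        # choose a validator with probability proportional to its stake
        names = list(validators)
        return random.choices(names, weights=[validators[n] for n in names], k=1)[0]

    def slash(name, fraction=0.5):
        # violating the rules costs the validator part of its stake
        validators[name] *= (1.0 - fraction)

    print(pick_block_proposer())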


Is NLP innovating faster than other domains of AI

There have been several stages in the evolution of the natural language processing field. It started in the 80s with expert systems, moving on to the statistical revolution, and finally the neural revolution. Speaking of the neural revolution, it was enabled by the combination of deep neural architectures, specialised hardware, and a large amount of data. That said, the revolution in the NLP domain was much slower than other fields like computer vision, which benefitted greatly from the emergence of large scale pre-trained models, which, in turn, were enabled by large datasets like ImageNet. Pretrained ImageNet models helped in achieving state-of-the-art results in tasks like object detection, human pose estimation, semantic segmentation, and video recognition. They enabled the application of computer vision to domains where the number of training examples is small, and annotation is expensive. One of the most definitive inventions in recent times was the Transformer. Developed at Google Brain in 2017, the Transformer is a novel neural network architecture based on the concept of the self-attention mechanism. The model outperformed both recurrent and convolutional models. 

Before you get too excited about Power Query in Excel Online, though, remember one important difference between it and a Power BI report or a paginated report. In a Power BI report or a paginated report, when a user views a report, nothing they do – slicing, dicing, filtering etc – affects or is visible to any other users. With Power Query and Excel Online however you’re always working with a single copy of a document, so when one user refreshes a Power Query query and loads data into a workbook that change affects everyone. As a result, the kind of parameterised reports I show in my SQLBits presentation that work well in desktop Excel (because everyone can have their own copy of a workbook) could never work well in the browser, although I suppose Excel Online’s Sheet View feature offers a partial solution. Of course not all reports need this kind of interactivity and this does make collaboration and commenting on a report much easier; and when you’re collaborating on a report the Show Changes feature makes it easy to see who changed what.


Observability Powered by SQL: Understand Your Systems Like Never Before With OpenTelemetry Traces and PostgreSQL

Given that observability is an analytics problem, it is surprising that the current state of the art in observability tools has turned its back on the most common standard for data analysis broadly used across organizations: SQL. Good old SQL could bring some key advantages: it’s surprisingly powerful, with the ability to perform complex data analysis and support joins; it’s widely known, which reduces the barrier to adoption since almost every developer has used relational databases at some point in their career; it is well-structured and can support metrics, traces, logs, and other types of data (like business data) to remove silos and support correlation; and finally, visualization tools widely support it. ... You're probably thinking that observability data is time-series data that relational databases struggle with once you reach a particular scale. Luckily, PostgreSQL is highly flexible and allows you to extend and improve its capabilities for specific use cases. TimescaleDB builds on that flexibility to add time-series superpowers to the database and scale to millions of data points per second and petabytes of data.
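
As a sketch of what that looks like in practice (the spans table and its columns are illustrative assumptions, not a schema either project prescribes), a p95 latency query over trace data could be run straight from Python:

    import psycopg2  # assumes a PostgreSQL/TimescaleDB instance holding trace spans

    SQL = """
    SELECT time_bucket('1 minute', start_time) AS minute,   -- TimescaleDB bucketing
           service_name,
           percentile_cont(0.95) WITHIN GROUP (ORDER BY duration_ms) AS p95_ms
    FROM spans
    WHERE start_time > now() - interval '1 hour'
    GROUP BY minute, service_name
    ORDER BY minute;
    """

    with psycopg2.connect("dbname=otel user=postgres") as conn:
        with conn.cursor() as cur:
            cur.execute(SQL)
            for minute, service, p95 in cur.fetchall():
                print(minute, service, p95)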


Why cyber security can’t just say “no”

Ultimately, IT security is all about keeping the company safe from damages — financial damages, operational damages, reputational and brand damages. You’re trying to prevent a situation that not only will harm the company’s well-being, but also that of its employees. That is why we need to explain the actual threats and how incidents occur. Explain what steps can be taken to lower the chances and impact of those incidents occurring and show them how they can be part of that. People love learning new things, especially if it has something to do with their daily work. Explain the tradeoffs that are being made, at least in high-level terms. Explain how quickly convenience, such as running a machine as an administrator, can lead to abuse. Not only will the companies appreciate you for your honesty, but they will have the right answer the next time the question comes up. They’ll think within those constraints and find new ways of adding value to the business, while removing risk factors from their daily work, which might mean one less incident down the line.



Quote for the day:

"Real leadership is being the person others will gladly and confidently follow." -- John C. Maxwell

Daily Tech Digest - May 16, 2022

OAuth Security in a Cloud Native World

As you integrate OAuth into your applications and APIs, you will realize that the authorization server you have chosen is a critical part of your architecture that enables solutions for your security use cases. Using up-to-date security standards will keep your applications aligned with security best practices. Many of these standards map to company use cases, some of which are essential in certain industry sectors. APIs must validate JWT access tokens on every request and authorize them based on scopes and claims. This is a mechanism that scales to arbitrarily complex business rules and spans across multiple APIs in your cluster. Similarly, you must be able to implement best practices for web and mobile apps and use multiple authentication factors. The OAuth framework provides you with building blocks rather than an out-of-the-box solution. Extensibility is thus essential for your APIs to deal with identity data correctly. One critical area is the ability to add custom claims from your business data to access tokens. Another is the ability to link accounts reliably so that your APIs never duplicate users if they authenticate in a new way, such as when using a WebAuthn key.
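
A minimal sketch of that per-request validation step, using the PyJWT library and assuming RS256-signed tokens with a space-delimited scope claim (the audience value is a placeholder):

    import jwt  # PyJWT; assumes RS256-signed access tokens and a known public key

    def authorize_request(token: str, public_key: str, required_scope: str) -> dict:
        # Validate signature, expiry and audience on every request...
        claims = jwt.decode(
            token,
            public_key,
            algorithms=["RS256"],
            audience="https://api.example.com",  # placeholder audience
        )
        # ...then authorize on scopes (and any custom claims your rules need)
        if required_scope not in claims.get("scope", "").split():
            raise PermissionError(f"token lacks required scope: {required_scope}")
        return claims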


APIs Outside, Events Inside

It goes without saying that external clients of an application calling the same API version — the same endpoint — with the same input parameters expect to see the same response payload over time. The need of end users for such certainty is once again understandable but stands in stark contrast to the requirements of the distributed application (DA) itself. In order for distributed applications to evolve and grow at the speed required in today’s world, those autonomous development teams assigned to each constituent component need to be able to publish often-changing, forward-and-backward-compatible payloads as a single event to the same fixed endpoints using a technique I call "version-stacking." ... A key concern of architects when exposing their applications to external clients via APIs is — quite rightly — security. Those APIs allow external users to affect changes within the application itself, so they must be rigorously protected, requiring many and frequent authorization steps. These security steps have obvious implications for performance, but regardless, they do seem necessary.
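
The article doesn't show version-stacking in code, but one way to picture the idea, with purely hypothetical field names, is a single event that carries several compatible payload versions side by side:

    # Purely illustrative: one event published to a fixed endpoint "stacks"
    # several schema versions, so old and new consumers each read the shape
    # they understand.
    event = {
        "type": "order.created",
        "payloads": {
            "v1": {"id": "42", "total": "99.95"},            # original shape
            "v2": {"order_id": "42", "total_cents": 9995},   # newer shape, same fact
        },
    }

    def read_order(event, known=("v2", "v1")):
        for version in known:  # a consumer prefers the newest version it knows
            if version in event["payloads"]:
                return version, event["payloads"][version]
        raise ValueError("no compatible payload version")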


More money for open source security won’t work

The best guarantor of open source security has always been the open source development process. Even with OpenSSF’s excellent plan, this remains true. The plan, for example, promises to “conduct third-party code reviews of up to 200 of the most critical components.” That’s great! But guess what makes something a “critical component”? That’s right—a security breach that roils the industry. Ditto “establishing a risk assessment dashboard for the top open source components.” If we were good at deciding in advance which open source components are the top ones, we’d have fewer security vulnerabilities because we’d find ways to fund them so that the developers involved could better care for their own security. Of course, often the developers responsible for “top open source components” don’t want a full-time job securing their software. It varies greatly between projects, but the developers involved tend to have very different motivations for their involvement. No one-size-fits-all approach to funding open source development works ...


Prepare for What You Wish For: More CISOs on Boards

Recently, the Securities and Exchange Commission (SEC) made a welcome move for cybersecurity professionals. In proposed amendments to its rules to enhance and standardize disclosures regarding cybersecurity risk management, strategy, governance, and incident reporting, the SEC outlined requirements for public companies to report any board member’s cybersecurity expertise. The change reflects a growing belief that disclosure of cybersecurity expertise on boards is important as potential investors consider investment opportunities and shareholders elect directors. In other words, the SEC is encouraging U.S. public companies to beef up cybersecurity expertise in the boardroom. Cybersecurity is a business issue, particularly now as the attack surface continues to expand due to digital transformation and remote work, and cyber criminals and nation-state actors capitalize on events, planned or unplanned, for financial gain or to wreak havoc. The world in which public companies operate has changed, yet the makeup of boards doesn’t reflect that.


12 steps to building a top-notch vulnerability management program

With a comprehensive asset inventory in place, Salesforce SVP of information security William MacMillan advocates taking the next step and developing an “obsessive focus on visibility” by “understanding the interconnectedness of your environment, where the data flows and the integrations.” “Even if you’re not mature yet in your journey to be programmatic, start with the visibility piece,” he says. “The most powerful dollar you can spend in cybersecurity is to understand your environment, to know all your things. To me that’s the foundation of your house, and you want to build on that strong foundation.” ... To have a true vulnerability management program, multiple experts say organizations must make someone responsible and accountable for its work and ultimately its successes and failures. “It has to be a named position, someone with a leadership job but separate from the CISO because the CISO doesn’t have the time for tracking KPIs and managing teams,” says Frank Kim, founder of ThinkSec, a security consulting and CISO advisory firm, and a SANS Fellow.


The limits and risks of backup as ransomware protection

One option is to use so-called “immutable” backups. These are backups that, once written, cannot be changed. Backup and recovery suppliers are building immutable backups into their technology, often targeting it specifically as a way to counter ransomware. The most common method for creating immutable backups is through snapshots. In some respects, a snapshot is always immutable. However, suppliers are taking additional measures to prevent these backups being targeted by ransomware. Typically, this is by ensuring the backup can only be written to, mounted or erased by the software that created it. Some suppliers go further, such as requiring two people to use a PIN to authorise overwriting a backup. The issue with snapshots is the volume of data they create, and the fact that those snapshots are often written to tier one storage, for reasons of rapidity and to lessen disruption. This makes snapshots expensive, especially if organisations need to keep days, or even weeks, of backups as a protection against ransomware. “The issue with snapshot recovery is it will create a lot of additional data,” says Databarracks’ Mote.


Four ways towards automation project management success

Having a fundamental understanding of the relationship between problem and outcome is essential for automation success. Process mining is one of the best options a business has to expedite this process. Leyla Delic, former CIDO at Coca Cola İçecek, eloquently describes process mining as a “CT scan of your processes”, taking stock and ensuring that the automation that you want to implement is actually problem-solving for the business. With process mining, one should expect to experiment somewhat blindly at first, learn what works, and only then expand and scale for real outcomes. A recent Forrester report found that 61% of executive decision-makers either are, or are looking at, using process mining to simplify their operations. Constructing a detailed, end-to-end understanding of processes provides the necessary basis to move from siloed, specific task automation to more holistic process automation – making a tangible impact. With the most advanced tools available today, one can even understand in real-time the actual activities and processes of knowledge workers across teams and tools, and receive automatic recommendations on how to improve work.


The Power of Decision Intelligence: Strategies for Success

While chief information officers and chief data officers are the traditional stakeholders and purchase decision makers, Kohl notes that he’s seeing increased collaboration between IT and other business management areas when it comes to defining analytics requirements. “Increasingly, line-of-business executives are advocating for analytics platforms that enable data-driven decision making,” he says. With an intelligent decisioning strategy, organizations can also use customer data -- preferably in real time -- to understand exactly where they are on their journeys -- be it an offer for a more tailored new service, or outreach with help if they’re behind on a payment. Don Schuerman, CTO of Pega, says this helps ensure that every interaction is helpful and empathetic, versus just a blind email sent without any context. In the same way that a good intelligence integration strategy can benefit customers, the ability to analyze employee data and understand roadblocks in their workflows helps solve for these problems faster and create better processes, resulting in happier, more productive employees.


Digital exhaustion: Redefining work-life balance

As workers continue to create and collaborate in digital spaces, one of the best things we can do as leaders is to let go. Let go of preconceived schedules, of always knowing what someone is working on, of dictating when and how a project should be accomplished – in effect, let go of micromanagement. Instead, focus on hiring productive, competent workers and trust them to do their jobs. Don’t manage tasks – gauge results. Use benchmarks and deadlines to assess effectiveness and success. This will make workers feel more empowered and trusted. Such “human-centric” design, as Gartner explains, emphasizes flexible work schedules, intentional collaboration, and empathy-based management to create a sustainable environment for hybrid work. According to Gartner’s evaluation, a human-centric approach to work stimulates a 28% rise in overall employee performance and a 44% decrease in employee fatigue. The data supports the importance of recognizing and reducing the impacts of digital exhaustion.


Late-Stage Startups Feel the Squeeze on Funding, Valuations

Investors are now tracking not only a prospect's burn rate but also their burn multiple, which Sekhar says measures how much cash a startup is spending relative to the amount of ARR it is adding each year. As a result, he says, deals that last year took two days to get done are this year taking two weeks since investors are engaging in far more due diligence to ensure they're betting on a quality asset. "We've seen this in the past where companies spend irresponsibly and just run off a cliff expecting that they'll raise yet another round," Sekhar says. "I think we're going back to basics and focusing on building great businesses." Midstage and late-stage security startups have begun examining how many months of capital they have and whether they should slow hiring to buy more time to prove their value, Scheinman says. Startups want to extend how long they can operate before they have to approach investors for more money, given all the uncertainty in the market, he says. As a result, Scheinman says, venture-backed firms have cut back on hiring and technology purchases and placed greater emphasis on hitting their sales numbers. 



Quote for the day:

"Ninety percent of leadership is the ability to communicate something people want." -- Dianne Feinstein

Daily Tech Digest - May 15, 2022

Compliance Is A Crucial Part Of Digital Transformation—Here’s How To Achieve It

Staying legally compliant should be a priority for any small or medium-sized business looking to remain up and running. For entrepreneurs or those just entering the business environment, learning and understanding compliance may seem daunting. ... Legal compliance must be a top priority, and hiring the right legal counsel can provide your business with vital information to ensure compliance. It is also important to stay updated on state and federal regulations: according to the U.S. Small Business Administration (SBA), there are a few key areas of compliance businesses should be aware of, including internal requirements; ongoing state filing requirements; licenses, permits, and recertifications; and ongoing federal filing requirements. The internet is chock-full of information regarding SMB compliance. It’s also a good idea to consider consulting professional services to help with compliance management. Human resources (HR) professionals are typically well versed in compliance, so use them as resources, too. ... The final tip to remain compliant as an SMB is to use a centralized location for all company communications. Using one platform for all communications makes interactions more efficient and less confusing for employees. 


The Most Important Cybersecurity Step to Implement This Year

In our experience, passwords are prone to user error and difficult to regulate properly. Even complex passwords can be easily bypassed, especially if they’ve been part of a prior security breach. The point is, if a bad actor wants to get into your network, they will target your users’ passwords first -- and very often, they’ll succeed. ... MFA completely changes the password game. Instead of a simple string of text, MFA also requires an additional proof of identity to gain access to an account. Some examples include a PIN sent to your phone, a fingerprint scan, or a mobile authentication app. MFA makes most forms of login credential attacks exponentially harder. In many cases, there’s a 99 percent improvement in your team’s security ... all by adding just a single additional click! There’s really no good reason to ignore MFA. Passwords are so exposed -- and so crucial to identity access management -- that MFA is now a must-have. In fact, MFA is now required by both cyber-insurance providers and multiple compliance standards for government, medical, and manufacturing work. Unless a business employs MFA, renewing cyber-insurance coverage or getting new coverage is often next to impossible these days. It used to be a nice bonus, but now it’s a minimum requirement.
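
As one concrete example of a second factor, here is a minimal time-based one-time password (TOTP) sketch using the pyotp library, the mechanism behind most mobile authenticator apps:

    import pyotp  # a common Python TOTP library; shown as one concrete MFA factor

    secret = pyotp.random_base32()   # provisioned once, e.g., via a QR code
    totp = pyotp.TOTP(secret)

    code = totp.now()                # the 6-digit code an authenticator app shows
    print(totp.verify(code))         # server-side check -> True within the time window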


Enterprise Architecture Is A Foundational Skill For The Engineering Students

Nowadays, engineering graduates and post-graduates usually attain a cursory knowledge of Information Technologies and Information Systems during their curriculum as the majority of the educational programs followed in universities are not in conjunction with Business Informatics, which is an integral requirement for today’s digital organizations. There is a demand for professionals who possess in-depth knowledge in both technical and business spheres. They are required to not only manage the development of products efficiently, but also understand the business context and work to improve the business function by aligning IT with business drivers. This is why the Enterprise Architect’s role is increasing in importance to the business and provides an anchor in a sea of change. Before we move on, let’s do due diligence and get to know what Enterprise Architecture is. It is the process by which organizations standardize and organize IT infrastructure to align with the business goals. These strategies support digital transformation, IT growth, and the modernization of IT as a department.


How new API tools are transforming API management

APIs are taking over the world, revolutionizing the way your enterprise organizes IT, and giving you new ways to reach and secure lots of customers. They are powering supply chains and are re-shaping the value chain. According to a recent Nordic APIs statistics roundup, over 90% of developers are using APIs and they spend nearly 30% of their time coding them. This clearly illustrates how important APIs have become for businesses, but also how much impact they have on the workload of IT professionals. In the wake of the massive growth of API adoption there has been a surge in both launches and funding of API-centric start-ups. Many focus on innovating business services like communication services, payment processing, anti-fraud services, banking services, etc. Others offer technical capabilities that zoom in on the needs of API providers and consumers - the developers - which raises the question of how these tools complement full lifecycle API management solutions like webMethods API Management. Full lifecycle API management supports all stages of an API's lifecycle, from planning and design through implementation and testing to deployment and operation. It is a cornerstone of your digital business capabilities. 


Four use cases defining the new wave of data management

As the public becomes more aware of how AI is used within organizations, greater scrutiny is being placed upon models. Any semblance of bias – particularly as it relates to race, gender or socioeconomic status – has the potential to erase years of goodwill. Yet, even beyond public optics and moral imperatives, being able to trust AI implementations and easily explain why models arrived at certain results in better business decisions. The data fabric helps enable MLOps and Trustworthy AI by establishing trust in data, trust in models and trust in processes. Trust in data is created with the help of many capabilities noted earlier that deliver high-quality data that’s ready for self-service consumption by those who should have access. Trust in models relies upon MLOps-automated data science tools with built in transparency and accountability at each stage of the model lifecycle. Finally, trust in processes through AI governance delivers consistent repeatable processes that assist not only with model transparency and traceability but also time-to-production and scalability.


Data Quality Metrics: Importance and Utilization

Metrics and KPIs (key performance indicators) are often confused. Key performance indicators are a way of measuring performance over a period of time, while working toward a specific goal. KPIs supply target goals for teams, and milestones to measure progress. Metrics, on the other hand, use dimensions to measure the quality of data. It is, unfortunately, easy to use the terms interchangeably, but they are not the same thing. Key performance indicators can help develop an organization’s strategy and focus. Metrics are more of a “business as usual” measurement system. A KPI is one kind of metric. ... Business organizations struggle to adapt to the flood of new technologies and data processing techniques. The ability to not only adjust to changing circumstances, but to eclectically embrace the best of those technologies and techniques, can lead to long-term improvements, help to minimize work stress, and increase profits. Using high-quality data for decision-making can be the difference between success and failure. The key goals of a business are to become more profitable and successful, and high data quality can help to achieve those goals.
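
To make the distinction concrete, here is a toy sketch of one common data quality dimension, completeness. The KPI would be the target (say, keep completeness at or above 99% this quarter), while the metric itself is just the measurement:

    # Toy example of one data quality dimension: completeness, the share of
    # non-null values in a field.
    def completeness(values):
        if not values:
            return 0.0
        return sum(v is not None for v in values) / len(values)

    print(completeness(["a@x.com", None, "b@y.com", "c@z.com"]))  # 0.75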


An In-Depth Guide on the Types of Blockchain Nodes

Full nodes are responsible for maintaining the entire transaction record in a blockchain network. They are regarded as the blockchain’s servers where the data is stored and maintained. Full nodes can fall under several different blockchain governance models. If there are any improvements to be made to a blockchain, a majority of full nodes must be ready for it. So, it can be concluded that full nodes are given voting power in order to make any changes in a blockchain. However, certain scenarios can also arise when a change is not implemented even after the majority of full nodes agree to the change. It can happen when a big decision has to be made. ... Pruned nodes are given a specific memory capacity to store data. This means that any number of blocks can be added, but a pruned node can store only a limited number of blocks. To maintain the ledger, pruned nodes can keep downloading blocks until they reach the specified limit. Once the limit is attained, the node starts deleting the oldest blocks and making space for new ones in order to maintain the blockchain’s size. 
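
A toy sketch of that pruning behavior (illustrative only, not how any particular client implements storage): a bounded buffer that evicts the oldest block once the limit is reached.

    from collections import deque

    # Keep only the newest N blocks; appending past the limit silently evicts
    # the oldest block, as a pruned node would.
    PRUNE_LIMIT = 1000
    chain = deque(maxlen=PRUNE_LIMIT)

    def add_block(block):
        chain.append(block)

    for height in range(1500):
        add_block({"height": height})

    print(len(chain), chain[0]["height"])  # 1000 500 -> blocks 0-499 were pruned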


Crypto-assets and Decentralized Finance: A Primer

There also are a range of other activities—mostly occurring off the blockchain—that are linked to this simplified DeFi structure. These include asset management, automated trading bots, supply of data that are required inputs into conditional smart contracts, and blockchain governance arrangements (such as votes taken to determine the evolving structure of the blockchain). (In the language of DeFi, the suppliers of external data such as asset prices are known as “oracles.”) There also are a range of other off-chain providers—including exchanges and app developers—who combine many of these activities to facilitate retail and wholesale access to the DeFi system. To understand the mechanics of DeFi, it is useful to think of a smart contract as a vending machine. After someone identifies the quantity and type of the items they wish, and provides payment, the machine dispenses the desired objects. Indeed, this type of protocol is quite common even in TradFi. For example, crediting accounts with interest payments on a regular schedule requires that the bank’s operations receive signals on the interest rate and the date.


5 reference architecture designs for edge computing

Latency can be a major problem for applications that depend upon real-time access to data. Edge computing, which places computing near the user's or data source's physical location, is a way to deliver services faster and more reliably while gaining flexibility from hybrid cloud computing. This speed is vital in industries such as healthcare, utilities, telecom, and manufacturing. There are three categories of edge use cases: The first is called enterprise edge, and it allows customers to extend application services to remote locations. It has a core enterprise data store located in a datacenter or as a cloud resource. The second is operations edge, which focuses on analyzing inputs in real time (from Internet of Things sensors, for example) to provide immediate decisions that result in actions. For performance reasons, this generally happens onsite. This kind of edge is a place to gather, process, and act on data. The third category is provider edge, which manages a network for others, as in the case of telecommunications service providers. This type of edge focuses on creating a reliable, low-latency network with computing environments close to mobile and fixed users.


Mapping the Future Part 4: Technical Roadmaps

There is a give and take that must be accounted for to align technical execution with business planning. This is what makes the technical roadmap so important: it takes the ideas and validates their feasibility. This give and take is governed by two constraints: budget and available resources. Budget planning can be difficult. There is always a need to control costs, but at the same time, you need to invest in the future. This is where the strategy and capability roadmaps are important: they provide a lens through which budget decision-making can be performed. The budget limits what can be done. Which capabilities are most important to implement? Which technologies are truly required to support them? What is the return on investment? That last question can be difficult to answer. Traditionally, ROI has been analyzed on a per-project basis, but when we are talking about technologies and capabilities, an individual project may cross capabilities, or a capability may require several dependent projects before the ROI is realized.



Quote for the day:

"If a leader loves you, he makes sure you build your house on rock." -- Ugandan Proverb

Daily Tech Digest - May 14, 2022

Non-Cloud Native Companies: How the Developer Experience Can Make Digital Transformation Easier

Infrastructure and DevOps teams might be trying their best to serve the developer, but walking a mile in someone else’s shoes isn’t easy even with the best of intentions. To force the cultural change, consider cross-pollinating the teams, rotating a few individuals every so often, as the permanent state. That way, those creating the developer experience will have to experience it themselves, which tends to deflate any misplaced pride in one’s creation. In the opposite direction, the application developer, embedded in the DevOps team, gets to explain the problems far more effectively than in a series of meetings. Above all, the tactic helps the overall culture of collaboration more effectively than any insistence by management that “we’re one team”. Furthermore, application developers crave understanding what they are trying to accomplish and solving problems in light of it. A happy developer is one who works directly with the business people who define the goals, uses creativity to solve them, and experiences the results. An unhappy developer is one who builds something dictated without understanding why, and never finds out whether it worked.


Present and Future of the Microservice Architecture

Ultimately, the advantage of microservices is that they decouple development; they reduce development coupling so that teams can make progress more independently of one another. Otherwise, it's just a service-oriented architecture, not microservices. That decoupling is important. One of the things that I like in most definitions of microservices is that people say they should be aligned with a bounded context. That makes sense to me. I was chatting with Eric Evans about this a couple of weeks ago, and he came up with an idea that resonated with me, which is that the messaging layer is a separate bounded context. I think there are multiple separate bounded contexts: you have the bounds of the service, and then the messaging is something else. The protocol for exchanging information between the services is another abstraction. Another thing from Eric's book that resonates with me is that you always translate when you're crossing a bounded context. We should be translating the messages as they go across. That makes the example that Holly came up with an easier problem to deal with, where we have these ideas that are sometimes the same, sometimes different, and sometimes related.
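As a hedged sketch of what “translating when you cross a bounded context” can look like in code: the consuming context never handles the producer’s message type directly, and a translator maps between the two models at the boundary. All class and field names below are invented for illustration.

    // An Ordering-context event is translated at the boundary before the
    // Billing context sees it; neither context leaks its model into the other.
    record OrderPlaced(String orderId, String customerRef, long totalCents) {}
    record Invoice(String invoiceId, String accountId, long amountCents) {}

    final class OrderingToBillingTranslator {
        // The messaging layer is its own context; translation happens on entry.
        Invoice translate(OrderPlaced event) {
            return new Invoice(
                "inv-" + event.orderId(),
                event.customerRef(),       // Ordering's "customer" is Billing's "account"
                event.totalCents());
        }
    }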


Threat Actors Use Telegram to Spread ‘Eternity’ Malware-as-a-Service

Eternity—which researchers discovered on a TOR website, where the malware-as-a-service is also for sale—demonstrates the “significant increase in cybercrime through Telegram channels and cybercrime forums,” researchers wrote in the post. This is likely because threat actors can sell their products there without any regulation, they said. Each module is sold individually and has different functionality that researchers suspect is being repurposed from code in an existing GitHub repository, which the project’s developers are modifying and selling under a new name, according to Cyble. “Our analysis also indicated that the Jester Stealer could also be rebranded from this particular Github project which indicates some links between the two threat actors,” they wrote. ... Threat actors are selling the Eternity Worm, a virus that spreads through infected machines via files and networks, for $390. Features of the worm include its ability to spread through USB drives, local network shares, various local files, cloud drives such as Google Drive or Dropbox, and others. It can also send worm-infected messages to people’s Discord and Telegram channels and friends, researchers said.


Digital transformation on the CEO agenda

There are three rules of thumb that seem to be evolving. First is that companies that get the most value from this actually spend a lot of effort thinking about, “What are the new digital businesses to launch? How can we create new value with new products and new customers versus transforming the existing business processes?” There’s sort of a duality—you should spend as much focus on new digital business building as you do on transforming the current business. Rule of thumb number two is, you’ve got to focus on things that are big enough. And maybe that’s obvious, but it sometimes surprises us how many people will call something a digital transformation, and you add up the total economic impact, and it’s less than, say, 15 or 20 percent of the company’s overall EBITDA. If you’re not targeting at least 15 or 20 percent, in our mind it’s hard to call that a transformation and to sustain the level of organizational focus around it. And then the third rule of thumb is, it’s best to start with a concentration in a particular area rather than sprinkle a little bit of digital or a handful of analytics use cases broadly across the organization. 


Intro to Micronaut: A cloud-native Java framework

Micronaut delivers a slew of benefits gleaned from older frameworks like Spring and Grails. It is billed as "natively cloud native," meaning it was built from the ground up for cloud environments. Its cloud-native capabilities include environment detection, service discovery, and distributed tracing. Micronaut also delivers a new inversion-of-control (IoC) container, which uses ahead-of-time (AoT) compilation for faster startup. With AoT compilation, startup time doesn't increase with the size of the codebase. That's especially crucial for serverless and container-based deployments, where nodes are often shut down and spun up in response to demand. Micronaut is a polyglot JVM framework, currently supporting Java, Groovy, and Kotlin, with Scala support underway.  ... One cloud-native concept that Micronaut supports is federation. The idea of federation is that several smaller applications share the same settings and can be deployed in tandem. If that sounds an awful lot like a microservices architecture, you are right: the purpose is to make microservice development simpler and keep it manageable. See Micronaut's documentation for more about federated services.
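As a small taste of the programming model, here is a minimal Micronaut HTTP endpoint in Java. It assumes the micronaut-http-server-netty dependency is on the classpath, and the route and class names are arbitrary.

    import io.micronaut.http.annotation.Controller;
    import io.micronaut.http.annotation.Get;
    import io.micronaut.runtime.Micronaut;

    // The controller is discovered by Micronaut's IoC container, whose wiring
    // is computed ahead of time at compilation rather than by runtime scanning.
    @Controller("/hello")
    class HelloController {
        @Get("/{name}")                    // URI template variable binds to the parameter
        String greet(String name) {
            return "Hello, " + name;
        }
    }

    public class Application {
        public static void main(String[] args) {
            Micronaut.run(Application.class, args);  // starts the embedded Netty server
        }
    }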


4 Best Practices for Microservices Authorization

In the past, most authorization decisions have happened at the gateway, and developers can still enforce authorization there for microservices if they like. However, for security, performance, and availability, it's typically preferable to also enforce authorization for each microservice API. As mentioned, in a zero-trust architecture, every request must be both authenticated and authorized before it is allowed. It's entirely possible to send each of these authorization requests to a centralized service. However, this can add significantly to latency: a single user request might traverse numerous services, and if each of those requests requires an additional network hop to reach the centralized authorization engine, that can hamper the user experience. Fortunately, if you're using a tool like OPA, you can instead run a local authorization engine and policy library as a sidecar to each microservice, as in an Istio service mesh, which uses an Envoy proxy sidecar. Using this model, you can ensure that each request passes muster with an authorization check while maximizing the performance and availability of the service.
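For illustration, here is a sketch of what a per-service check against a local OPA sidecar might look like from Java, using OPA's documented Data API (POST /v1/data/<policy-path> on the default port 8181). The policy path and input fields are assumptions that would have to match the Rego policy the sidecar actually loads.

    import java.net.URI;
    import java.net.http.HttpClient;
    import java.net.http.HttpRequest;
    import java.net.http.HttpResponse;

    // Ask the co-located OPA sidecar to evaluate a policy for this request.
    // No network hop beyond localhost, which is the point of the sidecar model.
    final class OpaAuthorizer {
        private final HttpClient http = HttpClient.newHttpClient();

        boolean isAllowed(String user, String method, String path) throws Exception {
            String input = String.format(
                "{\"input\":{\"user\":\"%s\",\"method\":\"%s\",\"path\":\"%s\"}}",
                user, method, path);
            HttpRequest request = HttpRequest.newBuilder()
                .uri(URI.create("http://localhost:8181/v1/data/httpapi/authz/allow"))
                .header("Content-Type", "application/json")
                .POST(HttpRequest.BodyPublishers.ofString(input))
                .build();
            HttpResponse<String> response =
                http.send(request, HttpResponse.BodyHandlers.ofString());
            // OPA replies {"result": true} when the policy allows the request;
            // a real service would parse the JSON rather than string-match it.
            return response.body().replace(" ", "").contains("\"result\":true");
        }
    }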


Just in time? Bosses are finally waking up to the cybersecurity threat

"Today boards say, 'Can you come and brief our board, and can you stay while the CISO's briefing the board? And can you please give us a view about the quality of our controls and our estimation of risk?', which is hugely transparent," she said, speaking at the UK National Cyber Security Centre's (NCSC) Cyber UK conference in Newport, Wales. "I see that as well, it feels as if it's really maturing," said Lindy Cameron, CEO of the NCSC. "We've been trying really hard over the last few months to get organisations to step up but not panic, do the things we've asked them to for a long time and take it more seriously". The NCSC regularly issues advice to organisations on how to improve and manage cybersecurity issues, ranging from ransomware threats to potential nation state-backed cyberattacks – and Cameron said she's seen a more hands-on approach to cybersecurity from business leaders in recent months. "I've seen chief execs really asking their CISOs the right questions, rather than leaving them to it because they don't have to understand complex technology. It does feel like a much more engaging strategic conversation," she said.


Center for Threat-Informed Defense, Microsoft, and industry partners streamline MITRE ATT&CK® matrix evaluation for defenders

The methodology and insights from the top techniques list have many practical applications, including helping prioritize activities during triage. As the list is applied to more real-world scenarios, we can identify areas of focus and continue to improve our coverage of these TTPs and the behaviors of prevalent threat actors. Refining the criteria can further increase the accuracy of the results and make the project more customer-focused and more relevant for immediate action. ... This collaboration and innovation benefit everyone in the security community: not only those who use the MITRE ATT&CK framework as part of their products and services, but also our valued ecosystem of partners who build services on top of our platform to meet the unique needs of every organization and to advance threat-informed defense in the public interest. Microsoft is a research sponsor at the Center for Threat-Informed Defense, partnering to advance the state of the art in threat-informed defense in the public interest. One of our core principles at Microsoft is security for all, and we will continue to partner with MITRE and the broader community to collaborate on projects like this and share insights and intelligence.


How Waterfall Methodologies Stifle Enterprise Agility

Traditional organizational architecture can impose limitations on an enterprise’s ability to reach its digital transformation goals. The up-front model, with its focus on one long-range project, can slow productivity and choke creativity. While planning is still needed as agility scales, detailed technology life cycles with long timeline projections are no longer effective or profitable in meeting the business mandates that drive enterprises forward. Enterprise leaders are increasingly abandoning the five-year architectural plan for one designed to evolve with the ever-changing software development environment. Enterprise architects must now develop and promote adaptive methods that support agility in order to capture the value of new technologies like AI, machine learning, big data, IoT, and intuitive tools that enable advanced analytics and enterprise-wide collaboration. A less intentional architecture, decomposed into smaller units, can be managed by autonomous cross-functional teams that are accountable to peers and managers with shared strategic objectives, bringing all fields into a coherent whole.


Seven Ways to Fail at Microservices with Holly Cummins

We're starting to use CI/CD as a noun rather than a verb; we think it's something we can buy and put on the shelf, and then we have CI/CD. But if we think about the words in CI/CD, it's continuous integration and continuous delivery, or deployment, confusingly. What I often see is teams using feature branches who integrate their feature branch once a week. That, of course, is not continuous integration. It's better than every six months, but it's fundamentally not continuous. Really, if you're doing continuous integration, which you should be, everybody should be aiming to integrate at least once a day. That does mean you have to have some different habits in your code: you need to start coding with the things that aren't visible and then go on to the things that are visible, and so on. You need to make sure your quality is in place, with the tests in place first, so that you don't accidentally deliver something terrible.
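One common habit that makes daily integration workable is merging unfinished work behind a feature flag so that trunk stays releasable. A minimal sketch follows; the flag name and environment-variable lookup are invented for illustration.

    // Incomplete work merges daily but stays dark until the flag flips,
    // so integration is continuous without exposing half-built behaviour.
    final class CheckoutService {
        private final boolean newFlowEnabled = Boolean.parseBoolean(
            System.getenv().getOrDefault("NEW_CHECKOUT_FLOW", "false"));

        String checkout(String cartId) {
            return newFlowEnabled ? newCheckout(cartId)      // in progress, integrated daily
                                  : legacyCheckout(cartId);  // current behaviour stays live
        }

        private String newCheckout(String cartId)    { return "new:" + cartId; }
        private String legacyCheckout(String cartId) { return "legacy:" + cartId; }
    }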



Quote for the day:

"Leaders know the importance of having someone in their lives who will unfailingly and fearlessly tell them the truth." -- Warren G. Bennis