Daily Tech Digest - February 01, 2023

Top 6 roadblocks derailing data-driven projects

Making the challenge of getting sufficient funding for data projects even more daunting is the fact that they can be expensive endeavors. Data-driven projects require a substantial investment of resources and budget from inception, Clifton says. “They are generally long-term projects that can’t be applied as a quick fix to address urgent priorities,” Clifton says. “Many decision makers don’t fully understand how they work or deliver for the business. The complex nature of gathering data to use it efficiently to deliver clear [return on investment] is often intimidating to businesses because one mistake can exponentially drive costs.” When done correctly, however, these projects can streamline and save the organization time and money over the long haul, Clifton says. “That’s why it is essential to have a clear strategy for maximizing data and then ensuring that key stakeholders understand the plan and execution,” he says. In addition to investing in the tools needed to support data-driven projects, organizations need to recruit and retain professionals such as data scientists. 

IoT, connected devices biggest contributors to expanding application attack surface

Along with IoT and connected device growth, rapid cloud adoption, accelerated digital transformation, and new hybrid working models have also significantly expanded the attack surface, the report noted.  ... Insufficient visibility into and contextualization of application security risks leaves organizations in “security limbo” because they don’t know what to focus on and prioritize, 58% of respondents said. “IT teams are being bombarded with security alerts from across the application stack, but they simply can’t cut through the data noise,” the report read. “It’s almost impossible to understand the risk level of security issues in order to prioritize remediation based on business impact. As a result, technologists are feeling overwhelmed by new security vulnerabilities and threats.” Lack of collaboration and understanding between IT operations teams and security teams is having several negative effects too, the report found, including increased vulnerability to security threats and blind spots, difficulties balancing speed, performance and security priorities, and slow reaction times when addressing security incidents.

Firmware Flaws Could Spell 'Lights Out' for Servers

Five vulnerabilities in the baseboard management controller (BMC) firmware used in servers of 15 major vendors could give attackers the ability to remotely compromise the systems widely used in data centers and for cloud services. The vulnerabilities, two of which were disclosed this week by hardware security firm Eclypsium, occur in system-on-chip (SoC) computing platforms that use AMI's MegaRAC Baseboard Management Controller (BMC) software for remote management. The flaws could impact servers produced by at least 15 vendors, including AMD, Asus, ARM, Dell, EMC, Hewlett Packard Enterprise, Huawei, Lenovo, and Nvidia. Eclypsium disclosed three of the vulnerabilities in December, but withheld information on two additional flaws until this week in order to allow AMI more time to mitigate the issues. Since the vulnerabilities can only be exploited if the servers are connected directly to the Internet, the extent of the vulnerabilities is hard to measure, says Nate Warfield, director of threat research and intelligence at Eclypsium. 

As the anti-money laundering perimeter expands, who needs to be compliant, and how?

Remember: It’s not just existing criminals you’re looking for, but also people who could become part of a money laundering scheme. One very specific category is politically exposed persons (PEP), which refers to government workers or high-ranking officials at risk of bribery or corruption. Another category is people on sanctions lists, like the Specially Designated Nationals (SDN) list compiled by the Office of Foreign Assets Control (OFAC). These lists contain individuals and groups with links to high-risk countries. Extra vigilance is also necessary when dealing with money service businesses (MSB), as they’re more likely to become targets for money launderers. The point of all this is that a good AML program must include a thorough screening system that can detect high-risk customers before bringing them onboard. It’s great if you can stop criminals from accessing your system at all, but sometimes they slip through or influence existing customers. That’s why checking users’ backgrounds for red flags isn’t enough. You need to keep an eye on their current activity, too.
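The screening idea above can be sketched in a few lines of Python; the list contents, categories, and function name here are purely illustrative, not real OFAC data or any vendor's API.

```python
# Illustrative onboarding screen against a sanctions list and a PEP register.
# All names and data below are invented for the example.

SANCTIONS_LIST = {"ACME FRONT LLC", "JOHN DOE"}   # stand-in for an SDN-style list
PEP_REGISTER = {"JANE OFFICIAL"}                  # stand-in for a PEP register

def screen_customer(name: str) -> str:
    """Return a risk decision for a prospective customer."""
    normalized = name.strip().upper()
    if normalized in SANCTIONS_LIST:
        return "reject"                  # sanctions match: do not onboard
    if normalized in PEP_REGISTER:
        return "enhanced-due-diligence"  # PEP: extra vigilance required
    return "standard"                    # proceed with normal onboarding checks
```

In a real program this check would be one gate among several, followed by the ongoing activity monitoring the excerpt describes.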

Digital transformation: 4 essential leadership skills

Decisiveness by itself is not enough. A strong technology leader needs to operate with flexibility. The pace of change is no longer linear, and leaders have less time to assess and understand every aspect of a decision. Consequently, decisions are made faster and are not always the best ones. Realizing which decisions are not spot-on and being able to adapt quickly is an example of the type of flexibility a leader needs. Another area leaders should understand is when, how, and from whom to take input when making adjustments. For example, leaders shouldn’t rely solely on customer input to make all product decisions. A flexible leader needs to understand the impact on the development teams and support teams as well. In our experience, teams with decisive and flexible leaders are more accepting of change. This is especially true during transformation. Leaders need to know when and how to be decisive to lead their team to success. In tandem, future-ready leaders can adapt to new information and inputs in today’s fast-paced technology environment.

Pathways to a More Sustainable Data Center

“When building a data center to suit today's needs and the needs 20 years in the future, the location of the facility is a key aspect,” he says. “Does it have space to expand with customer growth? Areas to remediate and replace systems and components? Is it in an area that has an extreme weather event seasonally? Are there ways to bring more power to the facility with this growth?” He says these are just a few of the questions that need to be thought of when deploying and maintaining a data center long term. "Technology may be able to stretch the limits of what’s possible, but sustainability starts with people,” Malloy adds. “Employees that implement and follow data center best practices keep a facility running in peak performance.” He says implementing simple things such as efficient lighting, following management-oriented processes and support-oriented processes for a proper maintenance and part replacement schedule increase the longevity of the facility equipment and increase customer satisfaction. 

Enterprise architecture modernizes for the digital era

Although leading enterprise architects see the need for a tool that better reflects the way they work, they also have concerns. “Provenance and credibility are key, so you risk making the wrong decisions as an enterprise architect if there’s no accuracy in the data,” Gregory says of how EAM tools are reliant on data quality. Winfield agrees, adding: “The difficult bit is getting accurate data into the EAM.” Gartner, in its Magic Quadrant for EA Tools, reports that the EAM sector could face some consolidation, too: “Due to the importance and growth in use of models in modern business, we expect to see some major vendors in adjacent market territories make strategic moves by either buying or launching their own EA tools.” Still, some CIOs question the value of adding EAM tools to their technology portfolio alongside IT service management (ITSM) tools, for example. The Very Group’s Subburaj foresees this being a challenge. “Some business leaders will struggle to see the direct business impact,” he says. 

Career path to CTO – we map out steps to take

Successful CTOs will need a range of skills, including technical but also business attributes. “The ability to advise and steer the technology strategy that is right for the business in the current and changing market conditions is crucial,” says Ryan Sheldrake, field CTO, EMEA, at cloud security firm Lacework. “Spending and investing wisely and in a timely manner is one of the more finessed parts of being a successful CTO.” ... “To achieve a promotion to this level, you need both,” she says. “For most of the CTO assignments we deliver, a solid knowledge base in software engineering, technical, product and enterprise architecture is required, as well as knowledge of cloud technologies and information security. From a leadership perspective, candidates need excellent influencing skills, strategic thinking, commercial management skills, and the gravitas to convey a vision and motivate a team.” There are ways in which individuals can help themselves stand out. “One of the critical things I did that really helped me develop into a CTO was to have an external mentor who was already a CTO,” says Mark Benson, CTO at Logicalis UKI. 

How Good Data Management Enables Effective Business Strategies

Data governance should also not be overlooked as an important component of data management and data quality. Sometimes used interchangeably, there are important differences. If data quality, as we’ve seen, is about making sure that all data owned by an organization is complete, accurate, and ready for business use, data governance, by contrast, is about creating the framework and rules by which an organization will use the data. The main purpose of data governance is to ensure the necessary data informs crucial business functions. It is a continuous process of assessing, often through a data steward, whether data that has been cleansed, matched, merged, and made ready for business use is truly fit for its intended purpose. Data governance rests on a steady supply of high-quality data, with frameworks for security, privacy, permissions, access, and other operational concerns. A data management strategy that encompasses the elements described above with respect to data quality will empower a business environment that can successfully achieve and even surpass business goals – from improving customer and employee experiences to increasing revenue and everything in between.
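The "fit for its intended purpose" assessment described above can be illustrated with a hedged Python sketch; the field names and the completeness rule are invented for the example, not any standard.

```python
# Illustrative data-steward check: a record is treated as ready for business
# use only if every required field is present and non-empty.

REQUIRED = ("customer_id", "email", "country")  # invented field names

def record_is_fit(record: dict, required_fields: tuple = REQUIRED) -> bool:
    """True if the record passes this simple completeness rule."""
    return all(record.get(f) not in (None, "") for f in required_fields)

def quality_report(records: list) -> float:
    """Fraction of records ready for business use."""
    if not records:
        return 0.0
    return sum(record_is_fit(r) for r in records) / len(records)
```

A real governance framework would layer security, privacy, and permission rules on top of completeness checks like this one.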

What Is Policy-as-Code? An Introduction to Open Policy Agent

As business, teams, and maturity progress, we'll want to shift from manual policy definition to something more manageable and repeatable at the enterprise scale. How do we do that? First, we can learn from successful experiments in managing systems at scale: Infrastructure-as-Code (IaC), which treats the content that defines your environments and infrastructure as source code; and DevOps, the combination of people, process, and automation to achieve "continuous everything," continuously delivering value to end users. Policy as code uses code to define and manage policies, which are rules and conditions. Policies are defined, updated, shared, and enforced using code and leveraging Source Code Management (SCM) tools. By keeping policy definitions in source code control, whenever a change is made, it can be tested, validated, and then executed. The goal of PaC is not to detect policy violations but to prevent them. This leverages DevOps automation capabilities instead of relying on manual processes, allowing teams to move more quickly and reducing the potential for mistakes due to human error.
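In practice, OPA policies are written in its Rego language; as a purely illustrative stand-in, the shape of policy-as-code can be sketched in Python, with rules kept in version control and evaluated before a change ships. The resource fields and policy names below are invented for the example.

```python
# Illustrative policy-as-code: each policy is a small, testable rule kept in
# source control; a change is allowed only if no policy is violated.

def no_public_buckets(resource: dict) -> bool:
    """Storage buckets must not be publicly readable."""
    return not (resource.get("type") == "bucket" and resource.get("public"))

def required_owner_tag(resource: dict) -> bool:
    """Every resource must carry an 'owner' tag."""
    return "owner" in resource.get("tags", {})

POLICIES = [no_public_buckets, required_owner_tag]

def evaluate(resource: dict) -> list:
    """Return names of violated policies; an empty list means the change may proceed."""
    return [p.__name__ for p in POLICIES if not p(resource)]
```

Because the rules are ordinary code, they can be unit-tested and reviewed through the same SCM workflow as the rest of the codebase, which is the point of the approach.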

Quote for the day:

"Those who are not true leaders will just affirm people at their own immature level." -- Richard Rohr

Daily Tech Digest - January 31, 2023

Microsoft says cloud demand waning, plans to infuse AI into products

Microsoft Azure and other cloud services grew 38% in constant currency terms on a year-on-year basis, slowing down by 4% from the previous sequential quarter. “As I noted earlier, we exited Q2 with Azure growth in the mid-30s in constant currency. And from that, we expect Q3 growth to decelerate roughly four to five points in constant currency,” Amy Hood, chief financial officer at Microsoft, said during an earnings call. Growth in cloud numbers is expected to slow further through the year, warned Microsoft Chief Executive Satya Nadella. “As I meet with customers and partners, a few things are increasingly clear. Just as we saw customers accelerate their digital spend during the pandemic, we are now seeing them optimize that spend,” Nadella said during the earnings call, adding that enterprises were exercising caution in spending on cloud. Explaining further about enterprises optimizing their spend, Nadella said that enterprises wanted to get the maximum return on their investment and save expenses to put into new workloads.

Why Software Talent Is Still in Demand Despite Tech Layoffs, Downturn and a Potential Recession

We live in a world run by software programs. With increasing digitization, there will always be a demand for software solutions. In particular, software developers are in high demand within the tech industry. In the age of data, firms need software developers who will analyze the data to create software solutions. They will also use the data to understand user needs, monitor performance and modify the programs accordingly. Software developers have skills that prove them valuable in many industries. As long as an industry needs software solutions, a developer can provide and customize them to the firms that need them. ... Many tech workers suffered a terrible blow in 2022. Their prestigious jobs at giant tech firms vanished, leaving many stranded and confused. However, there is still a significant demand for tech professionals in our technological world, particularly software developers. Software development is the bedrock of the tech industry. Software engineers with valuable skill sets, experience and drive will quickly find other positions and opportunities. 

Cybercrime Ecosystem Spawns Lucrative Underground Gig Economy

Improving defenses have forced attackers to improve their tools and techniques, driving the need for more technical specialists, explains Polina Bochkareva, a security services analyst at Kaspersky. "Business related to illegal activities is growing on underground markets, and technologies are developing along with it," she says. "All this leads to the fact that attacks are also developing, which requires more skilled workers." The underground jobs data highlights the surge in activity in cybercriminal services and the professionalization of the cybercrime ecosystem. Ransomware groups have become much more efficient as they have turned specific facets of operations into services, such as offering ransomware-as-a-service (RaaS), running bug bounties, and creating sales teams, according to a December report. In addition, initial access brokers have productized the opportunistic compromise of enterprise networks and systems, often selling that access to ransomware groups. Such division of labor requires technically skilled people to develop and support the complex features, the Kaspersky report stated.

3 ways to stop cybersecurity concerns from hindering utility infrastructure modernization efforts

Cybersecurity is a priority across industries and borders, but several factors add to the complexity of the unique environment in which utilities operate. Along with a constant barrage of attacks, as a regulated industry, utilities face several new compliance and reporting mandates, such as the Cyber Incident Reporting for Critical Infrastructure Act of 2022 (CIRCIA). Other security considerations include aging OT, which can be challenging to update and to protect, the lack of control over third-party technologies and IoT devices such as smart home devices and solar panels, and finally, the biggest threat of all: human error. These risk factors put extra pressure on utilities, as one successful attack can have deadly consequences. A hacker’s (thankfully unsuccessful) attempt to poison the water supply in Oldsmar, Florida, is one example that comes to mind. Utilities have a lot to contend with even before adding data analytics into the mix. However, it is interesting to point out that consumers are significantly less worried about the privacy of data collected by utilities. 

Why cybersecurity teams are central to organizational trust

No business is an island; it depends on many partners (whether formal business partners or some other relationship) – a fact highlighted by the widespread supply chain challenges across many industries over the past couple of years. The security of software supply chains – which is to say, dependencies on upstream libraries and other code used by organizations in their software – is a topic of considerable focus today up to and including from the U.S. executive branch. It’s still arguably not getting the attention it deserves, though. The aforementioned 2023 Global Tech Outlook report found that, among the funding priorities within security, third-party or supply chain risk management came in at the very bottom, with just 12 percent of survey respondents saying it was a top priority. Deb Golden, who leads Deloitte’s U.S. Cyber and Strategic Risk practice, told the authors that there needs to be more scrutiny over supply chains. “Organizations are accountable for safeguarding information and share a responsibility to respond and manage broader network threats in near real-time,” she said. 

Global Microsoft cloud-service outage traced to rapid BGP router updates

The withdrawal of BGP routes prior to the outage appeared largely to impact direct peers, ThousandEyes said. With a direct path unavailable during the withdrawal periods, the next best available path would have been through a transit provider. Once direct paths were readvertised, the BGP best-path selection algorithm would have chosen the shortest path, resulting in a reversion to the original route. These re-advertisements repeated several times, causing significant route-table instability. “This was rapidly changing, causing a lot of churn in the global internet routing tables,” said Kemal Sanjta, principal internet analyst at ThousandEyes, in a webcast analysis of the Microsoft outage. “As a result, we can see that a lot of routers were executing best path selection algorithm, which is not really a cheap operation from a power-consumption perspective.” More importantly, the routing changes caused significant packet loss, leaving customers unable to reach Microsoft Teams, Outlook, SharePoint, and other applications. 
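The reversion behaviour described above follows from path length: when only longer transit paths remain, they win; when the direct path is readvertised, selection reverts to it. A minimal sketch of that single step (real BGP weighs many more attributes, such as local preference, origin, and MED, before AS-path length):

```python
# Simplified sketch of one criterion in BGP best-path selection:
# among available routes to a prefix, prefer the shortest AS path.

def best_path(routes: list) -> list:
    """Each route is a list of AS numbers; return the shortest AS path."""
    return min(routes, key=len)

# A withdrawal leaves only the transit path; a readvertisement restores the
# direct path and selection reverts. Repeated flapping like this forces
# routers to rerun selection over and over, churning the routing table.
```

The AS numbers here are arbitrary placeholders; the point is only that each withdrawal/readvertisement cycle triggers another round of selection on every affected router.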

New analog quantum computers to solve previously unsolvable problems

The essential idea of these analog devices, Goldhaber-Gordon said, is to build a kind of hardware analogy to the problem you want to solve, rather than writing some computer code for a programmable digital computer. For example, say that you wanted to predict the motions of the planets in the night sky and the timing of eclipses. You could do that by constructing a mechanical model of the solar system, where someone turns a crank, and rotating interlocking gears represent the motion of the moon and planets. In fact, such a mechanism was discovered in an ancient shipwreck off the coast of a Greek island dating back more than 2000 years. This device can be seen as a very early analog computer. Not to be sniffed at, analog machines were used even into the late 20th century for mathematical calculations that were too hard for the most advanced digital computers at the time. But to solve quantum physics problems, the devices need to involve quantum components.

Will Your Company Be Fined in the New Data Privacy Landscape?

“Some large US companies are continuing to be dealt pretty significant fines,” she says. “The regulation and fining of companies like Meta and others have raised consumer awareness of privacy rights. I think we’re approaching a perfect storm in the US where the rest of the world is moving toward a more consumer-protective landscape, so the US is following suit.” This includes activity by state policymakers as well as responses to cybersecurity breaches, Simberkoff says. She sees the conversation on data privacy being driven by increasingly complex regulatory requirements and consumer awareness of data privacy, which can include identity theft or stolen credit card information. “I think, frankly, companies like Apple help move that dialogue forward because they’ve made privacy one of their key issues in advertising,” says Simberkoff. The elevation of data privacy policies and consumer awareness might, at first blush, seem detrimental to data-driven businesses, but it could just require new operational approaches. “I think what we’re going to end up seeing is a different way of thinking about these things,” she says. 

What is the role of a CTO in a start-up?

The role of the CTO in a start-up can vary greatly from an equivalent position in a more established scale-up business. While in both scenarios the position concerns leadership of all technological decisions within a business, there are considerable differences in the focus and nature of the role. “Start-ups tend to be disruptive and fast-paced, with the goal of quick growth over long-term strategy development. So, start-up CTOs are often responsible for building the technological infrastructure from the ground up,” said Ryan Jones, co-founder of OnlyDataJobs. “Whereas in an established company, a CTO might be responsible for reviewing and improving the current technology stack and data infrastructure, in a start-up, these structures might not exist. So, the onus is on the CTO to create and implement an entire technological infrastructure and strategy. This also means that a hands-on approach is required. “Because start-up CTOs may be the only technologically minded individual within the company, they’re often required to go back on the tools and do the actual work required themselves rather than delegating to a team.”

Your Tech Stack Doesn’t Do What Everyone Needs It To. What Next?

IT needs to collaborate with citizen developers throughout the process to ensure maximum safety and efficiency. From the beginning, it’s important to confirm the team’s overall approach, select the right tools, establish roles, set goals, and discuss when citizen developers should ask for support from IT. Appointing a leader for the citizen developer program is a great way to help enforce these policies and hold the team accountable for meeting agreed-upon milestones. To encourage collaboration and make citizen automation a daily practice, it’s important to work continuously to identify pain points and manual work within business processes that can be automated. IT should regularly communicate with teams across the business, finance and HR departments to find opportunities for automation, clearly mapping out what change would look like for those impacted. Gaining buy-in from other team leaders is critical, so citizen developers and IT need to become internal advocates for the benefits of automation. Another non-negotiable ground rule is that citizen developers should only use IT-sanctioned tools and platforms. 

Quote for the day:

"If a window of opportunity appears, don't pull down the shade." -- Tom Peters

Daily Tech Digest - January 30, 2023

How to survive below the cybersecurity poverty line

All types of businesses and sectors can fall below the cybersecurity poverty line for different reasons, but generally, healthcare, start-ups, small- and medium-size enterprises (SMEs), education, local governments, and industrial companies all tend to struggle the most with cybersecurity poverty, says Alex Applegate ... These include wide, cumbersome, and outdated networks in healthcare, small IT departments and immature IT processes in smaller companies/start-ups, vast network requirements in educational institutions, statutory obligations and limitations on budget use in local governments, and custom software built around specific functionality and configurations in industrial businesses, he adds. Critical National Infrastructure (CNI) firms and charities also commonly find themselves below the cybersecurity poverty line, for similar reasons. The University of Portsmouth Cybercrime Awareness Clinic’s work with SMEs for the UK National Cyber Security Centre (NCSC) revealed that cybersecurity was a secondary issue for most micro and small businesses it engaged with, evidence that it is often the smallest companies that find themselves below the poverty line, Karagiannopoulos says.

The Importance of Testing in Continuous Deployment

Test engineers are usually perfectionists (I speak from experience), which is why it’s difficult for them to accept the risk of issues reaching end users. This approach has a hefty price tag and impacts the speed of delivery, but it’s acceptable if you deliver only once or twice per month. The correct approach would be automating critical paths in the application, both from a business perspective and for application reliability. Everything else can go to production without thorough testing because with continuous deployment, you can fix issues within hours or minutes. For example, if item sorting and filtering stops working in production, users might complain, but the development team could fix this issue quickly. Would it impact business? Probably not. Would you lose a customer? Probably not. These are the risks that should be OK to take if you can quickly fix issues in production. Of course, it all depends on the context – if you’re providing document-storage services for legal investigations, it would be a good idea to have an automated test for sorting and filtering.
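A critical-path check for the sorting-and-filtering example might look like this minimal sketch; the item structure and function names are invented for illustration, and the test is written as a plain function so it could run in any CI step.

```python
# Illustrative critical-path test for sorting and filtering, the kind of
# check worth automating even under continuous deployment.

def filter_items(items: list, min_price: int) -> list:
    """Keep only items at or above the price floor."""
    return [i for i in items if i["price"] >= min_price]

def sort_items(items: list) -> list:
    """Sort items by ascending price."""
    return sorted(items, key=lambda i: i["price"])

def test_critical_path():
    items = [{"price": 30}, {"price": 10}, {"price": 20}]
    filtered = filter_items(items, 15)
    assert [i["price"] for i in sort_items(filtered)] == [20, 30]
```

A handful of fast checks like this on the business-critical paths lets everything else ship with lighter testing, which is the trade-off the excerpt argues for.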

Why Trust and Autonomy Matter for Cloud Optimization

With organizations beginning to ask teams to do more with less, optimization — of all kinds — is going to become a vital part of what technology teams (development and operations alike) have to do. But for that to be really effective, team autonomy also needs to be founded on confidence — you need to know that what you’re investing time, energy and money on makes sense from the perspective of the organization’s wider goals. Fortunately, Spot can help here too. It gives teams the data they need to make decisions about automation, so they can prioritize according to what matters most from a strategic perspective. “People aren’t really sure what’s going to be happening six, nine, 10 months down the road.” Harris says. “Making it easier for people to get that actionable data no matter what part of the business you’re in, so that you can go in and you can say, ‘Here’s what we’re doing right, here’s where we can optimize’ — that’s a big focus for us.” One of the ways that Spot enables greater autonomy is with automation features. 

Keys to successful M&A technology integration

For large organisations merging together, unifying networks and technologies may take years. But for SMBs (small and medium-sized businesses) utilising more traditional technologies such as VPNs, integrations may be accomplished more quickly and with less friction. In scenarios where both the acquiring company and the company being acquired utilise more sophisticated SD-WAN networks, these technologies tend to be closed and proprietary in nature. Therefore, if both companies utilise the same vendor, integration can be managed more easily. On the other hand, if the vendors differ, it is not going to interlink with other networks as easily and needs a more careful step-by-step network transformation plan. ... Another key to a successful technology merger is to truly understand where your applications are going. For example, if two New York companies are joining forces, with most of the data and applications residing in the US East Coast, it wouldn’t make sense to interconnect networks in San Francisco. Along with this, it is important to make sure your regional networks are strong, even within your global network. In terms of where you are sending your traffic and data, it’s important to be as efficient as possible.

Understanding service mesh?

Service meshes don’t give an application’s runtime environment any additional features. Service meshes are unique in that they abstract the logic governing service-to-service communication to an infrastructure layer. This is accomplished by integrating a service mesh as a collection of network proxies into an application. Proxies are frequently used to access websites. Typically, a company’s web proxy receives requests for a web page and evaluates them for security flaws before sending them on to the host server. Prior to returning to the user, responses from the page are also forwarded to the proxy for security checks. ... But a service mesh is an essential management system that helps all the different containers work in harmony. Here are several reasons why you will want to implement a service mesh in an orchestration framework environment. In a typical orchestration framework environment, user requests are fulfilled through a series of steps, where each step is performed by a container. Each one runs a service that plays a different but vital role in fulfilling that request. Let us call the role played by each container its business logic.
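The proxy behaviour described above can be sketched as a toy example; the security check and the service below are stand-ins invented for illustration, not any real mesh or sidecar API.

```python
# Toy sketch of the sidecar-proxy idea: requests and responses both pass
# through a proxy that applies checks before the service's business logic
# ever sees them.

def security_check(message: str) -> bool:
    """Stand-in for the proxy's security evaluation."""
    return "<script>" not in message

def proxy(request: str, service) -> str:
    """Intercept a request, forward it if safe, and vet the response too."""
    if not security_check(request):
        return "request blocked"
    response = service(request)        # forward to the container's service
    if not security_check(response):
        return "response blocked"
    return response

def inventory_service(request: str) -> str:
    """Stand-in business logic running inside one container."""
    return f"handled: {request}"
```

In a real mesh, a proxy like this runs beside every container, so the communication logic lives in the infrastructure layer rather than in each service.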

Chaos Engineering: Benefits of Building a Test Strategy

Many organizations struggle to get visibility into where their most sensitive data is stored. Improper handling of that data can have disastrous consequences, such as compliance violations or trade secrets falling into the wrong hands. “Using chaos engineering could help identify vulnerabilities that, unless remediated, could be exploited by bad actors within minutes,” Benjamin says. Kelly Shortridge, senior principal of product technology at Fastly, says organizations can use chaos engineering to generate evidence of their systems’ resilience against adverse scenarios, like attacks. “By conducting experiments, you can proactively understand how failure unfolds, rather than waiting for a real incident to occur,” she says. The very nature of experiments requires curiosity -- the willingness to learn from evidence -- and flexibility so changes can be implemented based on that evidence. “Adopting security chaos engineering helps us move from a reactive posture, where security tries to prevent all attacks from ever happening, to a proactive one in which we try to minimize incident impact and continuously adapt to attacks,” she notes.
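A minimal sketch of the experiment idea Shortridge describes: inject a failure into a dependency and verify the system degrades gracefully rather than crashing. The names and failure mode below are invented for illustration.

```python
# Illustrative chaos experiment: force a dependency to fail at a chosen
# rate and confirm the caller falls back instead of propagating the error.
import random

def flaky_backend(fail_rate: float) -> str:
    """A dependency with an injected, tunable failure rate."""
    if random.random() < fail_rate:
        raise ConnectionError("injected failure")
    return "fresh data"

def fetch_with_fallback(fail_rate: float) -> str:
    """The system under test: must degrade gracefully under failure."""
    try:
        return flaky_backend(fail_rate)
    except ConnectionError:
        return "cached data"   # graceful degradation, not a crash
```

Running this with the failure rate forced to 100% is the experiment: it produces evidence of resilience before a real incident does.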

How to get buy-in on new technology: 3 tips

When making a case for new technology, keep your audience in mind. Tailoring your arguments to their role and goals will put you in a much better position to capture their attention and generate enthusiasm. Sometimes this will require you to shift away from strict business goals. If you need to speak with the chief revenue officer and are trying to justify an additional $100,000 for your tech stack, for example, you will need to focus on the bottom line and the financial benefit your proposal could provide. On the other hand, the head of engineering might not be interested in the finances and would rather discuss how engineers can better avoid burnout or otherwise become easier to manage. When advocating for stack improvements, working with a partner helps substantially. It’s good to have a boss or teammate help, but even better to find a leader on a different team or even in another department. If multiple departments have team members who champion a specific improvement, it makes a strong case that there’s a pervasive need for stack enhancements across the entire company.

How organizations can keep themselves secure whilst cutting IT spending

The zero trust network access model has been a major talking point for CIOs, CISOs and IT professionals for some time. While most organizations do not fully understand what zero trust is, they recognize the importance of the initiative. Enforcing principles of least privilege minimizes the impact of an attack. In a zero trust model, an organization can authorize access in real-time based on information about the account it has collected over time. To make such informed decisions, security teams need accurate and up-to-date user profiles. Without them, security teams can’t be 100% confident that the user gaining access to a critical resource isn’t a threat. However, with the sprawl of identity data – stored across cloud and legacy systems that are unable to communicate with each other – such decisions cannot be made accurately. Ultimately, the issue of identity management isn’t only getting more challenging with the digitalization of IT and migration to the cloud – it’s now also halting essential security projects such as zero trust implementation.
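A real-time, profile-driven access decision of the kind described above might be sketched like this; the attribute names and thresholds are invented for illustration, and a real deployment would draw them from the identity systems the excerpt mentions.

```python
# Illustrative zero-trust authorization: grant access only when the
# up-to-date user profile supports it (least privilege in action).

def authorize(profile: dict, resource_sensitivity: str) -> bool:
    """Decide access from profile attributes collected over time."""
    if not profile.get("mfa_verified"):
        return False
    if resource_sensitivity == "critical":
        # critical resources demand a managed device and a low risk score
        return (profile.get("managed_device", False)
                and profile.get("risk_score", 100) < 30)
    return True
```

The sketch also shows why stale or fragmented identity data is fatal to the model: if `risk_score` or `managed_device` can't be trusted, neither can the decision.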

Economic headwinds could deepen the cybersecurity skills shortage

Look at anyone’s research and you’ll see that more organizations are turning to managed services to augment overburdened and under-skilled internal security staff. For example, recent ESG research on security operations indicates that 85% of organizations use some type of managed detection and response (MDR) service, and 88% plan to increase their use of managed services in the future. As this pattern continues, managed security service providers (MSSPs) will need to add headcount to handle increasing demand. Since service provider business models are based on scaling operations through automation, they will calculate a higher return on employee productivity and be willing to offer more generous compensation than typical organizations. One aggressive security services firm in a small city could easily gain a near monopoly on local talent. At the executive level, we will also see increasing demand for the services of virtual CISOs (vCISOs) to create and manage security programs in the near term.

2023 Will Be the Year FinOps Shifts Left Toward Engineering

By adopting dynamic logs for troubleshooting issues in production, without the need to redeploy and add more costly logs and telemetry, developers can own the FinOps cost-optimization responsibility earlier in the development cycle and shorten the cost feedback loop. Dynamic logs and developer-native observability triggered from the developer’s integrated development environment (IDE) can be an actionable method to cut overall costs and better facilitate cross-team collaboration, which is one of the core principles of FinOps. “FinOps will become more of an engineering problem than it was in the past, where engineering teams had fairly free rein on cloud consumption. You will see FinOps information shift closer to the developer and end up part of pull-request infrastructure down the line,” says Chris Aniszczyk, CTO at the Cloud Native Computing Foundation. Keep in mind that it’s not always easy to prioritize and decide when to pull the cost optimization trigger. 

Quote for the day:

"Inspired leaders move a business beyond problems into opportunities." -- Dr. Abraham Zaleznik