Daily Tech Digest - July 20, 2022

CIOs contend with rising cloud costs

“A lot of our clients are stuck in the middle,” says Ashley Skyrme, senior managing director and leader of the Global Cloud First Strategy and Consulting practice at Accenture. “The spigot is turned on and they have these mounting costs because cloud availability and scalability are high, and more businesses are adopting it.” And as the migration advances, cloud costs soon rank second — next only to payroll — in the corporate purse, experts say. The complexity of navigating cloud use and costs has spawned a cottage industry of SaaS providers lining up to help enterprises slash their cloud bills. ... “Cloud costs are rising,” says Bill VanCuren, CIO of NCR. “We plan to manage within the large volume agreement and other techniques to reduce VMs [virtual machines].” Naturally, heavy cloud use is compounding the costs of maintaining or decommissioning data centers that are being kept online to ensure business continuity as the migration to the cloud continues. But more significant to the rising cost problem, experts say, is a lack of understanding that the compute, storage, and consumption models on the public cloud are varied, complicated, and easy to misjudge.


How WiFi 7 will transform business

In practice, WiFi 7 might not be rolled out for another couple of years — especially as many countries have yet to open up the new 6GHz spectrum for unlicensed public use. However, it is coming, and it's important to plan for this development, as it could progress more quickly than first thought. In the same way that traffic grows to fill bigger motorways as they are built, faster, more stable WiFi will encourage more usage and more users, and to quote the popular business mantra: “If you build it, they will come.” WiFi 7 is a significant improvement over all the past WiFi standards. It uses the same spectrum chunks as WiFi 6/6E, and can deliver data more than twice as fast. It has a much wider bandwidth for each channel, as well as a raft of other improvements. It is thought that WiFi 7 could deliver speeds of 30 gigabits per second (Gbps) to compatible devices, and that the new standard could make running cables between devices completely obsolete. It's no longer necessarily about what you can do with the data, but how you actually physically interact with it.


How to Innovate Fast with API-First and API-Led Integration

Many have assembled their own technologies as they have tried to deliver a more productive, cloud native platform-as-a-shared-service that different teams can use to create, compose and manage services and APIs. They try to combine integration, service development and API-management technologies on top of container-based technologies like Docker and Kubernetes. Then they add tooling on top to implement DevOps and CI/CD pipelines. Afterward come the first services and APIs to help expose legacy systems via integration, for example. When developers have access to such a platform within their preferred tools and can reuse core APIs instead of spending time on legacy integration, they can spend more of their time designing and building the value-added APIs. At best, a whole group can use all of these capabilities, because that spreads the adoption of best practices, helps get teams ramped up faster and lets them deliver quicker. But at the very least, APIs should be shared and governed together.
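To make the "expose legacy systems" step concrete, here is a minimal, hypothetical sketch in Python using FastAPI — the framework choice, names and endpoint are illustrative assumptions, not from the article. The API contract is declared first as typed models, and the legacy call is hidden behind it so other teams reuse the API rather than integrating with the legacy system directly:

```python
# Hypothetical sketch: API-first wrapping of a legacy system.
# Run with: uvicorn customer_api:app
from fastapi import FastAPI
from pydantic import BaseModel

app = FastAPI(title="Customer API", version="1.0.0")

class Customer(BaseModel):
    id: int
    name: str

def fetch_from_legacy(customer_id: int) -> dict:
    # Placeholder for the real legacy integration (mainframe, SOAP, DB, ...)
    return {"id": customer_id, "name": "Ada Lovelace"}

@app.get("/customers/{customer_id}", response_model=Customer)
def get_customer(customer_id: int) -> Customer:
    return Customer(**fetch_from_legacy(customer_id))
```

Because the framework derives an OpenAPI description from these declarations, the contract can be published to a shared catalog — which is what lets other teams discover and reuse the core API instead of re-integrating the legacy system themselves.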


Using Apache Kafka to process 1 trillion inter-service messages

One important decision we made for the Messagebus cluster is to only allow one proto message per topic. This is configured in Messagebus Schema and enforced by the Messagebus-Client. This was a good decision to enable easy adoption, but it has led to a proliferation of topics. When you consider that for each topic we create, we add numerous partitions and replicate them with a replication factor of at least three for resilience, there is a lot of potential to optimize compute for our lower throughput topics. ... Making it easy for teams to observe Kafka is essential for our decoupled engineering model to be successful. We have therefore automated metrics and alert creation wherever we can, to ensure that all the engineering teams have a wealth of information available to them to respond to any issues that arise in a timely manner. We use Salt to manage our infrastructure configuration and follow a GitOps-style model, where our repo holds the source of truth for the state of our infrastructure. To add a new Kafka topic, our engineers make a pull request into this repo and add a couple of lines of YAML.
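The YAML itself is internal to Cloudflare, but conceptually the pipeline ends in an ordinary topic-creation call. A hedged sketch of that end state, using the kafka-python admin client — topic name, partition count and broker address are hypothetical:

```python
# Sketch of what the GitOps pipeline ultimately does for a new topic.
from kafka.admin import KafkaAdminClient, NewTopic

admin = KafkaAdminClient(bootstrap_servers="localhost:9092")
admin.create_topics([
    NewTopic(
        name="messagebus.email-events",  # one proto message type per topic
        num_partitions=6,
        replication_factor=3,            # at least three replicas for resilience
    )
])
admin.close()
```

The replication factor of three matches the resilience setting described above; it also illustrates the cost the team mentions, since every new topic multiplies its partitions by three replicas.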


Load Testing: An Unorthodox Guide

A common shortcut is to generate the load on the same machine (i.e. the developer's laptop) that the server is running on. What's problematic about that? Generating load consumes CPU, memory, network bandwidth and I/O, and that will naturally skew your test results as to what request capacity your server can actually handle. Hence, you'll want to introduce the concept of a loader: A loader is nothing more than a machine that runs e.g. an HTTP client that fires off requests against your server. A loader sends n-RPS (requests per second) and, of course, you'll be able to adjust the number across test runs. You can start with a single loader for your load tests, but once that loader struggles to generate the load, you'll want to have multiple loaders. (There is nothing magical about any particular number: it could be 2, it could be 50.) It's also important that the loader generates those requests at a constant rate, best done asynchronously, so that response processing doesn't get in the way of sending out new requests. ... Bonus points if the loaders aren't on the same physical machine, i.e. not just adjacent VMs, all sharing the same underlying hardware.
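A minimal sketch of such a loader in Python with asyncio and aiohttp — URL, rate and duration are placeholder values. The point it illustrates is the constant-rate, asynchronous sending described above: each request runs as its own task, so a slow response never delays the next send.

```python
# Constant-rate loader sketch (target URL and numbers are placeholders).
import asyncio
import aiohttp

RPS = 50          # requests per second this loader generates
DURATION_S = 10   # test length in seconds

async def fire(session: aiohttp.ClientSession, url: str) -> None:
    async with session.get(url) as resp:
        await resp.read()  # a real loader would record status and latency here

async def run_loader(url: str) -> None:
    async with aiohttp.ClientSession() as session:
        tasks = []
        for _ in range(RPS * DURATION_S):
            tasks.append(asyncio.create_task(fire(session, url)))
            await asyncio.sleep(1 / RPS)  # constant send rate, independent of responses
        await asyncio.gather(*tasks, return_exceptions=True)

asyncio.run(run_loader("http://localhost:8080/"))
```

Once a single process like this maxes out its own CPU or network, you scale out by running the same script on additional loader machines and aggregating the results.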


Open-Source Testing: Why Bug Bounty Programs Should Be Embraced, Not Feared

There are two main challenges: one around decision-making, and another around integrations. Regarding decision-making, the process can really vary according to the project. For example, if you are talking about something like Rails, then there is an accountable group of people who agree on a timetable for releases and so on. However, within the decentralized ecosystem, these decisions may be taken by the community. For example, the DeFi protocol Compound found itself in a situation last year where, in order to agree to have a particular bug fixed, token-holders had to vote to approve the proposal. ... When it comes to integrations, these often cause problems for testers, even if their product is not itself open-source. Developers include packages or modules that are written and maintained by volunteers outside the company, where there is no SLA in force and no process for claiming compensation if your application breaks because an open-source third-party library has not been updated, or if your build script pulls in a later version of a package that is not compatible with the application under test.


3 automation trends happening right now

IT automation specifically continues to grow as a budget priority for CIOs, according to Red Hat’s 2022 Global Tech Outlook. While it’s outranked as a discrete spending category by the likes of security, cloud management, and cloud infrastructure, in reality, automation plays an increasing role in each of those areas. ... While organizations and individuals automate tasks and processes for a bunch of different reasons, the common thread is usually this: Automation either reduces painful (or simply boring) work or it enables capabilities that would otherwise be practically impossible – or both. “Automation has helped IT and engineering teams take their processes to the next level and achieve scale and diversity not possible even a few years ago,” says Anusha Iyer, co-founder and CTO of Corsha. ... Automation is central to the ability to scale distributed systems – quickly, reliably, and securely – whether viewed from an infrastructure POV (think hybrid cloud and multi-cloud operations), application architecture POV, security POV, or through virtually any other lens. Automation is key to making it work.


CIO, CDO and CTO: The 3 Faces of Executive IT

Most companies lack experience with the CDO and CTO positions. This makes these positions (and those filling them) vulnerable to failure or misunderstanding. The CIO, who has supervised most of the responsibilities that the CDO and CTO are being assigned, can help allay fears, and benefit from the cooperation, too. This can be done by forging a collaborative working partnership with both the CDO and CTO, who will need IT’s help. By taking a pivotal and leading role in building these relationships, the CIO reinforces IT’s central role, and helps the company realize the benefits of executive visibility of the three faces of IT: data, new technology research, and developing and operating IT business operations. Many companies opt to place the CTO and CDO in IT, where they report to the CIO. Sometimes this is done upfront. Other times, it is done when the CEO realizes that he/she doesn't have the time or expertise to manage three different IT functions. This isn't a bad idea, since the CIO already understands the challenges of leveraging data and researching new technologies.


Log4j: The Pain Just Keeps Going and Going

Why is Log4j such a persistent pain in the rump? First, it’s a very popular, open source Java-based logging framework. So it’s been embedded into thousands of other software packages. That’s no typo. Log4j is in thousands of programs. Adding insult to injury, Log4j is often deeply embedded in code and hidden from view because it is pulled in through indirect dependencies. So, the CSRB stated that “Defenders faced a particularly challenging situation; the vulnerability impacted virtually every networked organization, and the severity of the threat required fast action.” Making matters worse, according to CSRB, “There is no comprehensive ‘customer list’ for Log4j or even a list of where it is integrated as a subsystem.” ... “The pace, pressure, and publicity compounded the defensive challenges: security researchers quickly found additional vulnerabilities in Log4j, contributing to confusion and ‘patching fatigue’; defenders struggled to distinguish vulnerability scanning by bona fide researchers from threat actors; and responders found it difficult to find authoritative sources of information on how to address the issues,” the CSRB said.


Major Takeaways: Cyber Operations During Russia-Ukraine War

The operational security expert known as the grugq says Russia did disrupt command-and-control communications - but the disruption failed to stymie Ukraine's military. The government had reorganized from a "Soviet-style" centralized command structure to empower relatively low-level military officers to make major decisions, such as blowing up runways at strategically important airports before they were captured by Russian forces. Lack of contact with higher-ups didn't compromise the ability of Ukraine's military to physically defend the country. ... Another surprising development is the open involvement of Western technology companies in Ukraine's cyber defense, WithSecure's Hypponen says. "I'm surprised by the fact that Western technology companies like Microsoft and Google are there on the battlefield, supporting Ukraine against governmental attacks from Russia, which is again, something we've never seen in any other war." Western corporations aren't alone, either. Kyiv raised a first-ever volunteer "IT Army," consisting of civilians recruited to break computer crime laws in aid of the country's military defense.



Quote for the day:

"Leadership is a way of thinking, a way of acting and, most importantly, a way of communicating." -- Simon Sinek

Daily Tech Digest - July 19, 2022

Open source isn’t working for AI

It’s hard to trust AI if we don’t understand the science inside the machine. We need to find ways to open up that infrastructure. Loukides has an idea, though it may not satisfy the most zealous of free software/AI folks: “The answer is to provide free access to outside researchers and early adopters so they can ask their own questions and see the wide range of results.” No, not by giving them keycard access to Facebook’s, Google’s, or OpenAI’s data centers, but through public APIs. It’s an interesting idea that just might work. But it’s not “open source” in the way that many desire. That’s probably OK. ... Because open source is inherently selfish, companies and individuals will always open code that benefits them or their own customers. Always been this way, and always will. To Loukides’ point about ways to meaningfully open up AI despite the delta between the three AI giants and everyone else, he’s not arguing for open source in the way we traditionally did under the Open Source Definition. Why? Because as fantastic as it is (and it truly is), it has never managed to answer the cloud open source quandary—for both creators and consumers of software—that DiBona and Zawodny laid out at OSCON in 2006.


Botnet malware disguises itself as password cracker for industrial controllers

What's weird is that the malware also deploys code to check the clipboard contents for cryptocurrency wallet addresses, and silently rewrites those details to point to another wallet so as to steal people's funds. Remember, this is running on PCs normally connected to industrial equipment, so perhaps the crooks behind this caper just grabbed some generic nasty to use. "Dragos assesses with moderate confidence the adversary, while having the capability to disrupt industrial processes, has financial motivation and may not directly impact Operational Technology (OT) processes," the team wrote. The Sality malware family has been around for almost two decades, first being detected in 2003, and can be commanded by its masterminds to perform other malicious actions, such as attacking routers, F-Secure analysts wrote in a report. Sality maintains persistence on the host PC through process injection and file infection, and abuses Windows' autorun functionality to spread copies of itself over USB, network shares, and external storage drives, according to Dragos.


Rescale and Nvidia partner to automate industrial metaverse

The new partnership between Rescale and Nvidia will allow enterprises to connect workflows between Rescale’s existing catalog of engineering and scientific containers, Nvidia’s extensive NGC offerings, and enterprises’ standard containers of their own models and supporting software. This new containerized approach to engineering software means teams can specify the software libraries and configurations that reflect industry best practices. The recent Nvidia and Siemens partnership is an ambitious effort to bring together physics-based digital models and real-time AI. Rescale’s announcement with Nvidia enhances this partnership, as accelerated computing combined with high-performance computing is the foundation that powers these use cases. For example, enterprises can take advantage of Nvidia’s work on Modulus, which uses AI to speed up physics simulations hundreds or thousands of times. Siemens estimates that integrating physics and AI models could help save the power industry $1.7 billion in reduced turbine maintenance. The partnership could also make it easier for companies to integrate other apps that work on these tools.


Uber Files leak shows why India’s approach to security and privacy matters

In the Uber Files investigation led by The Guardian with the International Consortium of Investigative Journalists (ICIJ), the leaked documents provide evidence of law-breaking, lobbying of world leaders, use of stealth technologies to evade raids, and opaque algorithms deployed by the Uber Corporation in 2012-16. The documents show instances where Uber executives sanctioned the use of stealth technologies like the ‘Kill Switch’ to evade regulations and the efforts of investigative agencies to conduct a fair probe in India, Belgium and other countries. Along similar lines, reports indicate that e-commerce giant Amazon spent more than Rs 8,000 crore in India on legal fees in 2018-20. There are numerous incidents like these where regulators are in a Catch-22 between regulation and innovation. The Uber Files show how technology platforms deploy a multi-pronged strategy to subvert public opinion with sponsored academic work, allying with public officials, and wilfully stifling investigations of law enforcement agencies to dodge regulatory efforts for better transparency, accountability and public scrutiny of their architecture.


Is Microsoft’s VS Code really open source?

“Microsoft modifies VS Code in a way that a non-Microsoft VS Code fork can’t use extensions from the official Microsoft VS Code store. Not only that, some of the VS Code extensions developed and released by Microsoft will only work in the VS Code released by Microsoft and won’t work on non-Microsoft VS Code forks,” mentioned Ranatunge in his blog post. Microsoft has made similar moves in the past. It modified the open-source cross-platform IDE MonoDevelop into Visual Studio for Mac. Visual Studio for Mac has three versions: for students, professionals and enterprises. While the students’ version is free and supports classroom learning, individual developers and small companies must log in via the IDE to access the other versions. In 2021, Microsoft abruptly removed the Hot Reload functionality from the open-source .NET SDK, only to restore it later after the move enraged the .NET community. As stated, Microsoft follows an open-core model for VS Code. Therefore, developers who want the full open source code that is MIT licensed will have to download the code from the repository and then build VS Code on their own.


Open source security needs automation as usage climbs amongst organisations

"OSS is not insecure per se…the challenge is with all the versions and components that may make up a software project," he explained. "It is impossible to keep up without automation and prioritisation." He noted that the OSS community was responsive in addressing security issues and deploying fixes, but organisations tapping OSS would have to navigate the complexity of ensuring their software had the correct, up-to-date codebase. This was further compounded by the fact that most organisations would have to manage many projects concurrently, he said, stressing the importance of establishing a holistic software security strategy. He further pointed to the US National Institute of Standards and Technology (NIST), which offered a software supply chain framework that could aid organisations in planning their OSS security response. Asked if regulations were needed to drive better security practices, Liu said most companies saw cybersecurity as a cost and would not want to address it actively in the absence of any incentive.


How To Minimize the Impacts of Shadow IT on Your Business

Organizations looking to manage and mitigate the negative impacts of shadow IT must first perform an internal audit. Cloud security applications such as Microsoft’s Cloud App Security detect unsanctioned usage of applications and data. But detecting shadow IT is only one part of the equation. Companies should work to address the root causes. This may include optimizing communications between departments – particularly the IT team and other departments. If one department discovers a software solution that may be beneficial, they should feel comfortable approaching the IT team. CIOs and IT staff should develop processes that allow them to streamline software assessment and procurement. They should be able to give in-depth reasons why a particular tool suggested by a non-IT employee may be impracticable. Additionally, it is recommended that IT staff suggest a better alternative if they reject a proposed tool. Organizations should consider training non-IT staff in cybersecurity literacy and awareness. 


Gatling vs JMeter - What to Use for Performance Testing

There's a saying that every performance tester should know: "lies, damn lies, and statistics." If they don't know it yet, they will surely learn it in a painful way. A separate article could be written about why this sentence should be the mantra of the performance test area. In a nutshell: median, arithmetic mean, and standard deviation are completely useless metrics in this field (you can use them only as an additional insight). You can get more detail on that in this great presentation by Gil Tene, CTO and co-founder at Azul. Thus, if a performance testing tool only provides these statistics, it can be thrown out right away. The only meaningful metrics to measure and to compare performance are the percentiles. However, you should also treat them with some suspicion about how they were implemented. Very often the implementation is based on the arithmetic mean and standard deviation, which, of course, makes them equally useless. ... Another approach would be to check the source code of the implementation yourself. I regret that the documentation of most performance test tools does not cover how percentiles are calculated.
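A small self-contained illustration of why the mean misleads — the numbers are synthetic, chosen to mimic a latency distribution with a slow tail:

```python
# Mean vs. percentiles on a latency sample with a small slow tail.
import random

random.seed(42)
# 97% fast requests around 50 ms, 3% tail around 2000 ms
latencies = [random.gauss(50, 5) for _ in range(9700)] + \
            [random.gauss(2000, 200) for _ in range(300)]

def percentile(data: list[float], p: float) -> float:
    s = sorted(data)
    return s[int(p / 100 * (len(s) - 1))]

mean = sum(latencies) / len(latencies)
print(f"mean: {mean:.0f} ms")          # ~108 ms: looks perfectly healthy
for p in (50, 95, 99, 99.9):
    print(f"p{p}: {percentile(latencies, p):.0f} ms")  # p99 ~2000 ms: the tail appears
```

The mean of roughly 108 ms looks fine, while the p99 reveals that one request in a hundred takes around two seconds — exactly the kind of tail behavior Tene's talk warns about.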


BlackCat Adds Brute Ratel Pentest Tool to Attack Arsenal

Sophos investigators found that the attacker used commercially available tools such as AnyDesk and TeamViewer and also installed ngrok, an open-source remote access tool. "The attackers also used PowerShell commands to download and execute Cobalt Strike beacons on some machines, and a tool called Brute Ratel, which is a more recent pen-testing suite with Cobalt Strike-like remote access features," Brandt says. Sophos researchers found that the Brute Ratel binary was installed as a Windows service named wewe on an affected machine. One of the bigger challenges for the Sophos investigators was that some of the targeted organizations were running the same servers that had been compromised via the Log4j vulnerability. Apart from ransoming systems on the network, the threat actors collected and exfiltrated sensitive data from the targets and uploaded large volumes of data to Mega, a cloud storage provider. The attackers used a third-party tool called DirLister to create a list of accessible directories and files, or in some cases used a PowerShell script from a pen tester toolkit, called PowerView.ps1, to enumerate the machines on the network.


Removing the blind spots that allow lateral movement

One of the biggest challenges of lateral movement detection is its low anomaly factor. Lateral movement attacks exploit the gaps in an organization’s user authentication process. Such attacks tend to remain undetected because the authentication performed by the attacker is essentially identical to the authentication made by a legitimate user. Following the initial “patient zero” compromise, the attacker uses valid credentials to log in to organizational systems or applications. Therefore, the legacy IAM infrastructure in place cannot detect any anomaly during this process, which allows attackers to slip through and remain in the network undetected. Another key challenge is the potential mismatch or disparity between the endpoint and identity protection aspects. Endpoint protection solutions are mainly focused on detecting anomalies in file and process execution. However, the attacker gains access by exploiting the legitimate authentication infrastructure, utilizing legitimate files and processes. Therefore, the activity doesn’t appear on the radar of endpoint solutions.



Quote for the day:

"Sport fosters many things that are good; teamwork and leadership" -- Daley Thompson

Daily Tech Digest - July 18, 2022

Cyber Safety Review Board warns that Log4j event is an “endemic vulnerability”

According to the report, "The pace, pressure, and publicity compounded the defensive challenges." As a result, researchers found additional vulnerabilities in Log4j, contributing to confusion and "patching fatigue," and "responders found it difficult to find authoritative sources of information on how to address the issues. This frenetic period culminated in one of the most intensive cybersecurity community responses in history." ... The few organizations that responded effectively to the event "understood their use of Log4j and had technical resources and mature processes to manage assets, assess risk, and mobilize their organization and key partners to action. Most modern security frameworks call out these capabilities as best practices." ... A fog still hovers over the event because, "No authoritative source exists to understand exploitation trends across geographies, industries or ecosystems. Many organizations do not even collect information on specific Log4j exploitation, and reporting is still largely voluntary. Most importantly, however, the Log4j event is not over."


DTN’s CTO on combining IT systems after a merger

Enterprises often make strategic errors when combining IT systems following an acquisition, Ewe says. “The number one mistake I see is, ‘Since we acquired you, clearly we win,’” he says. “Just because A bought B, you don’t want to assume that A has better technology than B.” Another common mistake is to go solely by the numbers, picking one company’s IT system over the other’s because it has the highest revenue or profitability, he says: “The issue there is that you’re oversimplifying the process.” Given the investment in time and money necessary to merge two companies’ IT systems, “it’s worthwhile spending an extra few weeks up-front to make a more thorough analysis of which solution or which pieces of which solutions should come together,” Ewe says. Jumping straight in and making a wrong decision can cost more in the long term. Ewe consulted with product and sales management, and with customers, to identify the needs DTN’s single engine would have to satisfy, as well as the use cases it would serve, before evaluating the existing assets against those needs. 


Ransomware and backup: Overcoming the challenges

Recovering data after a ransomware attack is more complex and more risky than recovery from a system outage or natural disaster. The greatest risk is that backups contain undetected ransomware, which then replicates into the production system or recovered systems. This risk is reduced by using air-gapped copies and immutable copies and snapshots, and keeping more copies than would be required for conventional backup alone. This requires a more cautious approach to data recovery, and one that can be at odds with the commercial pressures for short RTOs and recent RPOs. Matters are made more difficult because there are no viable, fool-proof systems that can scan data for ransomware before it is backed up, says Barnaby Mote, managing director at backup specialist Databarracks. “Before ransomware was a thing, replicating data from production systems to DR as quickly as possible was a sound recovery strategy for conventional disasters,” he says. “Now, with ransomware, it has the opposite of the desired effect, rendering recovery systems unusable.”


Continuous Intelligence: Definition, Benefits, and Examples

While humans cannot inspect every possible characteristic and combination in the flood of incoming data, machines can. Complementing analytics that provide precise answers to questions users know to ask, a machine can continuously monitor data in the background to detect unknown correlations and trends that deviate from what the system would have expected based on previous observations. This way, companies can identify hidden, but potentially relevant, signals in the data. Gartner predicts that by 2022, more than half of major new business systems will incorporate continuous intelligence capabilities. By integrating artificial intelligence (AI)-based continuous intelligence into their day-to-day operations, companies can: boost efficiency by spending less time sifting through data from a variety of disparate sources; focus on what really matters for their business; and speed time to action. By automatically inspecting critical business health indicators such as revenue, web page views, active users, or transaction volume in real time, businesses can accelerate their time to insight and action and better respond to situations before the business is impacted.
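As a hedged sketch of the core idea — all numbers and thresholds below are invented for illustration — the always-on monitoring boils down to a loop that learns a baseline from previous observations and flags deviations without anyone having asked a question:

```python
# Continuous monitoring sketch: compare a business metric (here, hypothetical
# transactions per minute) against a baseline learned from recent history.
from collections import deque
from statistics import mean, stdev

window: deque[float] = deque(maxlen=60)  # last hour of per-minute counts

def observe(value: float) -> None:
    if len(window) >= 10:
        mu, sigma = mean(window), stdev(window)
        if sigma and abs(value - mu) > 3 * sigma:
            print(f"alert: {value:.0f} deviates from expected ~{mu:.0f}")
    window.append(value)

for v in [980, 1010, 995, 1005, 990, 1000, 1015, 985, 1002, 998, 1001, 310]:
    observe(v)  # the final drop to 310 triggers the alert
```

Real systems replace the naive three-sigma rule with more robust models, but the background, always-on character of the loop is the point.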


7 reasons Java is still great

As a longtime Java programmer, it was surprising—astonishing, actually—to watch the language successfully incorporate lambdas and closures. Adding functional constructs to an object-oriented programming language was a highly controversial and impressive feat. So was absorbing concepts introduced by technologies like Hibernate and Spring (JSR 317 and JSR 330, respectively) into the official platform. That such a widely used technology can still integrate new ideas is heartening. Java's responsiveness helps to ensure the language incorporates useful improvements. It also means that developers know they are working within a living system, one that is being nurtured and cultivated for success in a changing world. Project Loom—an ambitious effort to re-architect Java’s concurrency model—is one example of a project that underscores Java's commitment to evolving. Several other proposals currently working through the JCP demonstrate a similar willingness to go after significant goals to improve Java technology. The people working on Java are only half of the story. The people who work with it are the other half, and they are reflective of the diversity of Java's many uses.


Search Here: Ransomware Groups Refine High-Pressure Tactics

Ransomware groups continue to refine the tactics they use to better pressure victims into paying. And they're succeeding. "In recent months, we have seen an increase in the number of ransomware attacks and ransom amounts being paid," the heads of Britain's lead cybersecurity agency and privacy watchdog warned last week in an open letter to the legal industry. The impetus for the alert from Britain's National Cyber Security Center - the public-facing arm of intelligence agency GCHQ - and the Information Commissioner's Office: They're urging solicitors to never advise clients to pay a ransom. Doing so will not lessen any penalties the ICO might levy, helps perpetuate the ransomware business model and could violate U.S. sanctions, they say. But the increase in ransoms being paid speaks to the success of ransomware groups' continuing innovation. Psychological pressure remains a specialty. After infecting systems, many types of ransomware reboot infected PCs to a lock screen that lists the ransom demand, a cryptocurrency wallet address for routing funds and a countdown timer. 


Functional programming is finally going mainstream

For some, using an object-oriented language like Java, JavaScript, or C# for functional programming can feel like swimming upstream. “A language can steer you towards certain solutions or styles of solutions,” says Gabriella Gonzalez, an engineering manager at Arista Networks. “In Haskell, the path of least resistance is functional programming. You can do functional programming in Java, but it’s not the path of least resistance.” A bigger issue for those mixing paradigms is that you can’t expect the same guarantees you might receive from pure functions if your code includes other programming styles. “If you’re writing code that can have side effects, it’s not functional anymore,” Williams says. “You might be able to rely on parts of that code base. I’ve made various functions that are very modular, so that nothing touches them.” Working with strictly functional programming languages makes it harder to accidentally introduce side effects into your code. “The key thing about writing functional programming in something like C# is that you have to be careful because you can take shortcuts and then you’ve got the exact sort of mess you would have if you weren’t using functional programming at all,” Louth says.
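The article's examples are Haskell and C#; as a minimal illustration of the point about side effects, here is a sketch in Python (names invented for the example). The pure version can be trusted, memoized and tested in isolation; the impure one has a hidden effect that callers can't see from its signature:

```python
# Pure vs. impure: same result, very different guarantees.
order_log: list[str] = []

def total_impure(prices: list[float]) -> float:
    order_log.append(f"{len(prices)} items")  # hidden side effect: mutates global state
    return sum(prices)

def total_pure(prices: list[float]) -> float:
    return sum(prices)                        # output depends only on the input

# total_pure is safe to call in parallel or cache; total_impure is
# "not functional anymore" in Williams' sense, however small the effect.
assert total_pure([9.99, 5.00]) == total_pure([9.99, 5.00])
```

This is also why mixing paradigms demands care: one stray effect in a "mostly functional" code path quietly voids the guarantees of everything built on top of it.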


Safeguarding the open source model amidst big tech involvement

Two of the main techniques to safeguard open source and its community are smart licensing tactics and constant innovation. The first technique is to simply switch the project licence from an open source licence to a more restrictive one. There are two specific licences that can be used to protect against clouds and corporations: AGPL-3 and SSPL — developed by the likes of MongoDB, Elastic and Grafana specifically to protect themselves from AWS. While many projects have shifted away from GPL-style licences towards more permissive forms of licensing, under the GPL contributors are required to make their code available to the open source community; the so-called “copyleft”. This traditional licensing style helps to create a more open, transparent ecosystem. Another way in which open source can safeguard its future is through smart innovation. Constantly innovating in order to satisfy users should be the way forward for the evolution of open source projects and solutions. This would enable companies to maintain their competitive edge and keep up with technological trends.


5 ways fear can derail your digital transformation strategy

When we confront new work technologies such as a hybrid workplace, virtual meeting rooms, or new software, we tend to resist or avoid them simply because they’re new and we’re not used to them. This creates division. A company looking to offer a hybrid workplace might encounter resistance from employees, managers, and even customers who refuse to recognize this arrangement. What appears to be a simple reluctance to change is actually a deep-seated fear of changing a comfortable status quo. What you can do about this: Offer facts to neutralize fear. People often use their own frame of reference if they are not given something tangible to hold on to. If the change involves new technology, demonstrate the technology. Let them see how it works. If the change is organizational, such as a hybrid workspace, present the facts about how it will work, what will change, and what will stay the same. Listen to and respond to their questions and objections. Humans are dominated by emotion, and logic is always playing catch-up. 


The Four P's of Pragmatically Scaling Your Engineering Organization

Your people aren’t just the heart and soul of the company, they’re the building blocks for its future. When you're growing rapidly it can be tempting to add developers to your team as quickly as possible, but it's important to first consider your company goals while remaining practical about how you’re scaling. This is the key foundation for building the right organization. ... Scaling your processes comes down to practical prioritization. It is crucial to clearly establish processes that balance both short- and long-term wins for the company, beginning with the systems that need to be fixed immediately. Start by instituting a planning process looking at things from both an annual perspective and quarterly, or even monthly– and try not to get bogged down deliberating over a planning methodology in the first stage. ... Scaling the platform is often the biggest challenge organizations face in the hyper-growth phase. But it’s important to remember that building toward a north star doesn’t mean that you’re building the north star. Now is the time to focus on intentional, iterative improvement of the platform rather than implementing sweeping changes to your product.



Quote for the day:

"It is one thing to rouse the passion of a people, and quite another to lead them." -- Ron Suskind

Daily Tech Digest - July 17, 2022

The Shared Responsibility of Taming Cloud Costs

The cost of cloud impacts the bottom line and therefore, cloud cost management cannot be the job of the CIO alone. It’s important to create a culture or framework where managing cloud costs is a shared responsibility among business, product, and engineering teams, and where it’s a consideration throughout the software development process and in IT operations. In order to do just this, it’s important to shift education left. Like many DevOps principles, “shift-left” once had a specific meaning that has become more generalized over time. At its core, the idea of shifting left is to be proactive when it comes to cost management in all management and operational processes. It means empowering developers and making operational considerations a key part of application development. Change management must be connected in the context of cost. If organizations educate and empower developers to understand the impact of cloud cost as software is written, they will reap the benefits of building more cost effective software that improves operational visibility and control.


How AI Regulations Are Shaping Its Current And Future Use

Examining some of the many laws that have been passed in relation to AI, I have identified some of the best practices for both statewide and nationwide regulation. On a national level, it is crucial to both develop public trust in AI as well as have advisory boards to monitor the use of AI. One such example is having specific research teams or committees dedicated to identifying and studying deepfakes. In the U.S., Texas and California have legally banned the use of deepfakes to influence elections, and the EU created a self-regulating Code of Practice on Disinformation for all online platforms to achieve similar results. Another necessity is to have an ethics committee that monitors and advises the use of AI in digitization activities, a practice currently in place in Belgium (pg. 179). Specifically, this committee encourages companies that use AI to weigh the costs and benefits of implementation compared to the systems that will get replaced. Finally, it’s important to promote public trust in AI on a national level.


5 key considerations for your 2023 cybersecurity budget planning

The cost of complying with various privacy regulations and security obligations in contracts is going up, Patel says. “Some contracts might require independent testing by third-party auditors. Auditors and consultants are also raising fees due to inflation and rising salaries,” he says. ... “When an organization is truly secure, the cost to achieve and maintain compliance should be reduced,” he says. Evolving regulatory compliance requirements, especially for those organizations supporting critical infrastructure, require significant support, Chaddock says. “Even the effort to determine what needs to happen can be costly and detract from daily operations, so plan for increased effort to support regulatory obligations if applicable,” he says. ... If paying for such policies comes out of the security budget, CISOs will need to take into consideration the rising costs of coverage and other factors. Companies should be sure to include the cost of cyber insurance over time, and more important the costs associated with maintaining effective and secure backup/restore capabilities, Chaddock says.


CISA pulls the fire alarm on Juniper Networks bugs

The networking and security company also issued an alert about critical vulnerabilities in Junos Space Security Director Policy Enforcer — this component provides centralized threat management and monitoring for software-defined networks — but noted that it's not aware of any malicious exploitation of these critical bugs. While the vendor didn't provide details about the Policy Enforcer bugs, they received a 9.8 CVSS score, and there are "multiple" vulnerabilities in this product, according to the security bulletin. The flaws affect all versions of Junos Space Policy Enforcer prior to 22.1R1, and Juniper said it has fixed the issues. The next group of critical vulnerabilities exists in third-party software used in the Contrail Networking product. In this security bulletin, Juniper issued updates to address more than 100 CVEs that go back to 2013. Upgrading to release 21.4.0 moves the Open Container Initiative-compliant Red Hat Universal Base Image container image from Red Hat Enterprise Linux 7 to Red Hat Enterprise Linux 8, the vendor explained in the alert.


HTTP/3 Is Now a Standard: Why Use It and How to Get Started

As you move from one mast to another, from behind walls that block or bounce signals, connections are commonly cut and restarted. This is not what TCP likes — it doesn’t really want to communicate without formal introductions and a good firm handshake. In fact, it turns out that TCP’s strict accounting and waiting for that last stray packet just mean that users have to wait around for webpages to load and new apps to download, or for a timed-out connection to be re-established. So to take advantage of the informality of UDP, and to allow the network to use some smart stuff on-the-fly, the new QUIC (Quick UDP Internet Connections) format got more attention. While we don’t want to see too much intelligence within the network itself, we are much more comfortable these days with automatic decision making. QUIC understands that a site is made up of multiple files, and it won’t blight the entire connection just because one file hasn’t finished loading. The other trend that QUIC follows up on is built-in security. Whereas encryption was optional before (i.e. HTTP or HTTPS), QUIC is always encrypted.
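A conceptual sketch of that "one stalled file doesn't blight the connection" property — this simulates the behavior in Python, it is not the QUIC wire protocol, and the file names and delays are invented:

```python
# Head-of-line blocking, simulated: three files load over one connection
# and one of them (app.js) stalls.
import asyncio

FILES = {"index.html": 0.1, "app.js": 2.0, "style.css": 0.1}

async def fetch(name: str, delay: float) -> str:
    await asyncio.sleep(delay)
    return name

async def tcp_like() -> None:
    for name, d in FILES.items():             # one strictly ordered byte stream:
        print("done:", await fetch(name, d))  # style.css waits behind app.js

async def quic_like() -> None:
    tasks = [fetch(n, d) for n, d in FILES.items()]  # independent streams
    for fut in asyncio.as_completed(tasks):
        print("done:", await fut)             # style.css arrives without waiting

asyncio.run(tcp_like())   # style.css usable only after ~2.2s
asyncio.run(quic_like())  # style.css done at ~0.1s despite the stall
```

In the TCP-style loop everything queues behind the slow file; with QUIC-style independent streams, the fast files complete immediately, which is exactly why mobile connections that drop and resume feel so much better over HTTP/3.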


The enemy of vulnerability management? Unrealistic expectations

First and most importantly, you need to be realistic. Many organizations want critical vulnerabilities fixed within seven days. That is not realistic if you only have one maintenance window per month. Additionally, if you do not have the ability to reboot all your systems every weekend, you are setting yourself up for failure. If you only have one maintenance window per month, there is no reason to set a due date on critical vulnerabilities of less than 30 days. For obvious reasons, organizations are nervous about speaking publicly about how quickly they remediate vulnerabilities. One estimate states that the mean time to remediate for private sector organizations is between 60 and 150 days. You can get into that range by setting due dates of 30, 60, 90, and 180 days for severities of critical, high, medium, and low, respectively. Better yet, this is achievable with a single maintenance window each month. As someone who has worked on both sides of this problem, getting it fixed eventually is more important than taking a hard line on getting it fixed lightning fast, and then having it sit there partially fixed indefinitely. Setting an aggressive policy that your team cannot deliver on looks tough.
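The 30/60/90/180-day policy above is simple enough to express as code. A minimal sketch (severity labels and dates are illustrative):

```python
# Due-date policy from the article: every fix lands within a monthly
# maintenance window cadence.
from datetime import date, timedelta

SLA_DAYS = {"critical": 30, "high": 60, "medium": 90, "low": 180}

def remediation_due(severity: str, found: date) -> date:
    return found + timedelta(days=SLA_DAYS[severity])

print(remediation_due("critical", date(2022, 7, 19)))  # 2022-08-18
```

Encoding the policy this way also makes the trade-off explicit: shrink SLA_DAYS below your real maintenance cadence and the numbers become promises you cannot keep.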


‘Callback’ Phishing Campaign Impersonates Security Firms

Researchers likened the campaign to one discovered last year, dubbed BazarCall, run by the Wizard Spider threat group. That campaign used a similar tactic to try to spur people to make a phone call to opt out of renewing an online service the recipient was purportedly using, Sophos researchers explained at the time. If people made the call, a friendly person on the other side would give them a website address where the soon-to-be victim could supposedly unsubscribe from the service. However, that website instead led them to a malicious download. ... Researchers did not specify which other security companies were being impersonated in the campaign, which they identified on July 8, they said. In their blog post, they included a screenshot of the email sent to recipients impersonating CrowdStrike, which appears legitimate by using the company’s logo. Specifically, the email informs the target that it’s coming from their company’s “outsourced data security services vendor,” and that “abnormal activity” has been detected on the “segment of the network which your workstation is a part of.”


The next frontier in cloud computing

Terms are beginning to emerge, such as “supercloud,” “distributed cloud,” “metacloud” (my vote), and “abstract cloud.” Even the term “cloud native” is up for debate. To be fair to the buzzword makers, they all define the concept a bit differently, and I know the wrath that comes with defining a buzzword a bit differently than others do. The common pattern seems to be a collection of public clouds, and sometimes edge-based systems, that work together for some greater purpose. The metacloud concept will be the single focus for the next 5 to 10 years as we begin to put public clouds to work. Having a collection of cloud services managed with abstraction and automation is much more valuable than attempting to leverage each public cloud provider on its terms rather than yours. We want to leverage public cloud providers through abstract interfaces to access specific services, such as storage, compute, artificial intelligence, data, etc., and we want to support a layer of cloud-spanning technology that allows us to use those services more effectively. A metacloud removes the complexity that multicloud brings these days.


A CIO’s guide to guiding business change

When it comes to supporting business change, the “it depends” answer amounts to choosing the most suitable methodology, not the methodology the business analyst has the darkest belt in. But on the other hand, the idea of having to earn belts of varying hue, or their equivalent levels of expertise, in several of these methodologies, just so you can choose the one that best fits a situation, might strike you as too intimidating to bother with. Picking one to use in all situations, and living with its limitations, is understandably tempting. If adding to your belt collection isn’t high on your priority list, here’s what you need to know to limit your hold-your-pants-up apparel to suspenders, leaving the black belts to specialists you bring in for the job once you’ve decided which methodology fits your situation best. Before you can be in a position to choose, keep in mind the six dimensions of process optimization: fixed cost, incremental cost, cycle time, throughput, quality, and excellence. You need to keep these center stage because: you can optimize around no more than three of them; the ones you choose have tradeoffs; and each methodology is designed to optimize different process dimensions.


7 Reasons to Choose Apache Pulsar over Apache Kafka

Apache Pulsar is like two products in one. Not only can it handle high-rate, real-time use cases like Kafka, but it also supports standard message queuing patterns, such as competing consumers, fail-over subscriptions, and easy message fan out. Apache Pulsar automatically keeps track of the client's read position in the topic and stores that information in its high-performance distributed ledger, Apache BookKeeper. Unlike Kafka, Apache Pulsar can handle many of the use cases of a traditional queuing system, like RabbitMQ. So instead of running two systems — one for real-time streaming and one for queuing — you do both with Pulsar. It’s a two-for-one deal, and those are always good. ... Well, with Apache Pulsar it can be that simple. If you just need a topic, then use a topic. You don’t have to specify the number of partitions or think about how many consumers the topic might have. Pulsar subscriptions allow you to add as many consumers as you want on a topic with Pulsar keeping track of it all.
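As a hedged sketch of the queuing pattern described above, here is a competing-consumers setup using the pulsar-client Python library — the broker URL, topic and subscription names are placeholders:

```python
# Competing consumers on a plain Pulsar topic: a Shared subscription fans
# work out across consumers, RabbitMQ-style, with no partition-count math.
import pulsar

client = pulsar.Client("pulsar://localhost:6650")
consumer = client.subscribe(
    "orders",                                  # just a topic, nothing to size up front
    subscription_name="order-workers",
    consumer_type=pulsar.ConsumerType.Shared,  # queue semantics: each message to one worker
)

while True:  # runs until the process is stopped
    msg = consumer.receive()
    print("processing:", msg.data())
    consumer.acknowledge(msg)  # Pulsar tracks the read position in BookKeeper
```

Run several copies of this script and Pulsar distributes messages across them; switch the consumer type to Failover and you get the fail-over subscription pattern instead, all on the same topic.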



Quote for the day:

"Be willing to make decisions. That's the most important quality in a good leader." -- General George S. Patton, Jr.

Daily Tech Digest - July 15, 2022

Large-Scale Phishing Campaign Bypasses MFA

In the phishing campaign observed by Microsoft researchers, attackers initiate contact with potential victims by sending emails with an HTML file attachment to multiple recipients in different organizations. The messages claim that the recipients have a voicemail message and need to click on the attachment to access it before it is deleted in 24 hours. If a user opens the attachment, they are redirected to a site that tells them they will be redirected again to their mailbox with the audio in an hour. Meanwhile, they are asked to sign in with their credentials. At this point, however, the attack does something unique: using clever coding, it automatically fills in the phishing landing page with the user’s email address, “thus enhancing its social engineering lure,” researchers noted. If a target enters his or her credentials and gets authenticated, he or she is redirected to the legitimate Microsoft office.com page. However, in the background, the attacker intercepts the credentials and gets authenticated on the user’s behalf, gaining free rein to perform follow-on activities, researchers said.


Mergers and acquisitions put zero trust to the ultimate test

Zero trust is getting a hard look from enterprises that are pushing more workloads into the cloud and edge amid more employees working remotely, all of which lie beyond the boundaries of datacenter security. The architecture assumes that no user, device, or application on the network can be trusted. Instead, a zero-trust framework relies on identity, behavior, authentication, and policies to verify and validate everything on the network and to determine such issues as access and privileges. ... "When a company [buys another], they have to identify which applications of the acquired company they should keep and which they should eliminate," he said. "Then, for a period of time, the acquired company will only give them limited access to applications in the acquiring company and vice-versa. To do so, traditionally they have to bring the two corporate networks together. When they integrate corporate networks, it creates problems. "Each site has the same IP address name. They call them 'overlapping IP addresses.' Now they have to rename and create the stuff. It takes time, money and effort."


8 servant leadership do’s and don’ts

Being a servant leader doesn’t mean giving up control or “letting people do whatever they want,” Dotlich says. “I don’t think it means that you do whatever [employees] ask either, which is how we normally think of ‘servants.’ But it is really facilitating people’s performance, goals, achievements, and aspirations. In that way you’re serving who they want to be or what they want to achieve.” ... During periods of high pressure, “sometimes we as leaders want to keep pushing forward but that’s exactly the wrong thing to do,” Reis says. “Sometimes it’s just better to take a minute, reframe, and then re-engage.” Leaders can also show empathy with feedback, he says. “It would be easy to hear a list of complaints and for defensiveness to set in,” Reis says. “But the empathy is in understanding that the issues being raised are part of the teammates’ sincere desire to make things better. You’re empathizing with that frustration and really hearing that,” he says. ... It’s important for each organization to define servant leadership “in a way that works in your own system, that people understand and that is not misleading,” Dotlich says.


Researchers trained an AI model to ‘think’ like a baby, and it suddenly excelled

Typically, AI models start with a blank slate and are trained on data with many different examples, from which the model constructs knowledge. But research on infants suggests this is not what babies do. Instead of building knowledge from scratch, infants start with some principled expectations about objects. For instance, they expect that if they attend to an object that is then hidden behind another object, the first object will continue to exist. This is a core assumption that starts them off in the right direction. Their knowledge then becomes more refined with time and experience. The exciting finding by Piloto and colleagues is that a deep-learning AI system modelled on what babies do outperforms a system that begins with a blank slate and tries to learn based on experience alone. ... If you show an infant a magic trick where you violate this expectation, they can detect the magic. They reveal this knowledge by looking significantly longer at events with unexpected, or “magic,” outcomes, compared to events where the outcomes are expected.


12 Ways to Improve Your Monolith Before Transitioning to Microservices

A rewrite is never an easy journey, but by moving from monolith to microservices, you are changing more than the way you code; you are changing the company’s operating model. Not only do you have to learn a new, more complex tech stack, but management will also need to adjust the work culture and reorganize people into smaller, cross-functional teams. How best to reorganize the teams and the company is a subject worthy of a separate post. In this article, I want to focus on the technical aspects of the migration. First, it’s important to research as much as possible about the tradeoffs involved in adopting microservices before even getting started. You want to be absolutely sure that microservices (and not other alternative solutions such as modularized monoliths) are the right solution for you. ... During development, you’ll not only be constantly shipping out new microservices but also re-deploying the monolith. The faster and more painless this process is, the more rapidly you can progress. Set up continuous integration and delivery (CI/CD) to test and deploy code automatically.


A Data Professional without Business Acumen Is Like a Sword without a Handle

In my journey to become an impactful data professional, I’ve found three statements to be an excellent pivot: Identify what you love doing in your career, and more importantly, what you do not — it is okay to feel overwhelmed by the depth data science and analytics has to offer; start small with the basics, and build your way up to complex projects at your own pace. Read what people are working on; that can inspire you, set expectations, and introduce you to the latest and greatest in the data community. Take time to create your value proposition as a data person and work to be the subject-matter expert for a niche; be the pacesetter of goals, so that people turn to you for knowledge, advice, or to get stuff done. Also, a data professional without business acumen is like a sword without a handle. The ability to translate business problems into data and connect it back to business impact is compelling and much appreciated in today’s world. If all of these still don’t connect with you, there are plenty of other roles in data beyond data scientist and analyst! There’s a lot in store for a technology enthusiast today.


Making sense of data with low-code environments

A serious low-code environment provides data scientists flexibility around the tools they use. At the same time, it allows them to focus on the interesting parts of their job, while abstracting away tool interfacing and the different versions of involved libraries. A good environment lets data scientists reach out to code if they want to, but ensures they do not have to touch code every time they want to control the internals of an algorithm. Essentially, this allows visual programming of a data flow process — data science done for real is complex, after all. If done right, the low-code environment continues to allow access to new technologies, making it future-proof for ongoing innovations in the field. But the best low-code environments also ensure backward compatibility and include a mechanism to easily package and deploy trained models, together with all the necessary data transformation steps, into production. ... The business people often complain that the data folks work slowly, don’t quite understand the real problem and, at the end of it all, don’t quite arrive at the answer the business side was looking for.


Technology is providing the resilience that businesses need at uncertain times

From the blockchain to the Metaverse to emotional AI, digital technologies are rapidly advancing at a time when enterprises face more pressure than ever to innovate to gain a competitive advantage. How can companies apply human-centric technologies to transform the future of their business? Radically Human, a new book from Accenture Technology leaders Paul Daugherty and H. James Wilson, offers business leaders an easy-to-understand breakdown of today's most advanced human-inspired technologies and an actionable IDEAS framework that will help you approach innovation in a completely new way. In Radically Human, Daugherty and Wilson show this profound shift, fast-forwarded by the pandemic, toward more human -- and more humane -- technology. The book introduces us to a new innovation framework and the basic building blocks of business -- Intelligence, Data, Expertise, Architecture, and Strategy (IDEAS) -- that are transforming competition. Daugherty also highlights the three stages of human-machine interaction.


Low-code development becoming business skill ‘table stakes’

Cloud computing software provider ServiceNow said that more than 80% of its customer base now uses its low-code solution, App Engine, and that App Engine's active developer base grows by 47% every month. Marcus Torres, general manager of the App Engine business at ServiceNow, said the ability to create business applications with low-code and no-code tools is becoming an expected skill set for businesses. Much of that is because the business side of the house understands a company's application needs better than the IT shop does. The millennials and younger workers who make up the majority of today's workforce are also far more comfortable with technology, including software development, than older workers. "They understand there is an app that provides some utility for them," Torres said. "With these [low-code] platforms, people typically try it out, get some initial success, and then try to do more." Torres has seen groups ranging from facilities teams to human resources departments develop applications, with the development work done by people who typically don't have a technology pedigree.


Why tech professionals are leaving IT companies for an MBA

IT experience combined with business training provides a big-picture view of a tech firm's direction, from the viewpoint of clients, various departments, cost, and the firm's future. The right kind of MBA program offers hands-on experience of creating products and services and of working in an environment similar to a tech firm's. Beyond soft skills like leadership, teamwork, and communication, the hard skills of problem solving, strategic planning, and data analytics, exercised within the framework of the fast-evolving tech world, can really increase the hiring value of MBAs with prior tech experience. Good MBA programs also expose their graduates to various hubs, including tech companies, opening up networking opportunities with peers and current leaders who are all invested in building the right kind of talent for the future. This surely beats being stuck in a dead-end software role with little learning and development. Good MBA programs also increase the value of their graduates, who command better salaries than their pre-MBA experience alone would.



Quote for the day:

"A leader or a man of action in a crisis almost always acts subconsciously and then thinks of the reasons for his action." -- Jawaharlal Nehru

Daily Tech Digest - July 11, 2022

What Do Authentication & Authorization Mean In Zero Trust?

Authorization depends on authentication. It makes no sense to authorize a user if you do not have any mechanism in place to make sure the person or service is exactly what, or who, they say they are. Most organizations have some mechanism in place to handle authentication, and many have role-based access controls (RBAC) that group users by role and grant or deny access based on those roles. In a zero trust system, however, both authentication and authorization are much more granular. To return to the castle analogy we explored previously, before zero trust the network would be considered a castle, and inside the castle there would be many different types of assets. In most organizations, human users would be authenticated individually: they have to prove not only that they belong to a particular role, but that they are exactly the person they say they are. Service users can often also be granularly authenticated. In an RBAC system, however, each user is granted or denied access on a group basis; all the human users in the "admin" category would get blanket access, for example.
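
To make the contrast concrete, here is a minimal Python sketch of a coarse role check next to the more granular, identity-aware check a zero trust system layers on top of it. All the names and policies here are hypothetical, not drawn from any particular product:

    # Hypothetical sketch: coarse RBAC versus a more granular zero trust check.
    ROLE_PERMISSIONS = {
        "admin": {"read", "write", "delete"},
        "analyst": {"read"},
    }

    def rbac_allows(role: str, action: str) -> bool:
        # Classic RBAC: everyone in a role gets the same blanket access.
        return action in ROLE_PERMISSIONS.get(role, set())

    def zero_trust_allows(user: dict, action: str, resource: dict) -> bool:
        # Identity must be verified first: authorization depends on authentication.
        if not user.get("identity_verified"):
            return False
        # The role check still applies ...
        if not rbac_allows(user["role"], action):
            return False
        # ... but sensitive actions are also decided per identity, per resource.
        if action == "delete":
            return user["id"] in resource.get("delete_grants", [])
        return True

    alice = {"id": "alice", "role": "admin", "identity_verified": True}
    bob = {"id": "bob", "role": "admin", "identity_verified": True}
    doc = {"delete_grants": ["alice"]}
    print(zero_trust_allows(alice, "delete", doc))  # True: role and identity both pass
    print(zero_trust_allows(bob, "delete", doc))    # False: right role, wrong identity

The point of the sketch is the last check: two users in the same "admin" role get different answers, which a pure group-based decision cannot express.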


As hiring freezes and layoffs hit, is the bubble about to burst for tech workers?

Until now, the tech industry has largely sailed through the economic turbulence that has impacted other industries. Remote working and an urgency to put everything on the cloud or in an app – significantly accelerated by the pandemic – have created fierce demand for those who can create, migrate, and secure software. However, tech leaders are bracing for tough times ahead. According to recent data from CW Jobs, 85% of IT decision makers expect their organization to be impacted by the rising cost of doing business – including hiring freezes (21%) and pay freezes (20%). We're already seeing this play out, with Tesla, Uber and Netflix amongst the big names to have announced hiring freezes or layoffs in recent weeks. Meanwhile, Microsoft, Coinbase and Meta have all put dampeners on recruiting. If tech workers are concerned about this ongoing tightening of belts, they aren't showing it: the same CW Jobs report found that tech professionals remain confident enough in the industry that 57% expect a pay rise in the next year. Hiring freezes and layoffs don't seem to have had much impact on worker mobility, either: just 24% of professionals surveyed by CW Jobs say they plan to stay in their current role for the next 12 months.


ERP Modernization: How Devs Can Help Companies Innovate

Many of these ERP-based companies are facing pressure to update to more modern, cloud-based versions of their ERP platforms. But they must run a gauntlet to modernize their legacy applications. In a sense, companies that maintain these complex ERP-based systems find the environments are like "golden handcuffs." They have become so complicated over time that they restrain IT departments' innovation efforts, hindering their ability to create supply chain resiliency when it is most needed. To make matters more difficult, the current market is facing a global shortage of the human resources required to get the job of digital transformation and application modernization done, including skilled ERP developers, especially those skilled in more antiquated languages like ABAP. Incoming developer talent is often trained in more contemporary languages like Java and Python, or in SAP's cloud-native ABAP environment known as Steampunk. These graduates have their pick of opportunities and gravitate to companies that already work in these newer programming environments. ERP migrations can be hampered by complex, customized systems developed by high-priced, silo-skilled programmers.


Believe it or not, metaverse land can be scarce after all

As we can see, technological constraints and business logic dictate the fundamentals of digital realms and the activities those realms can host. The digital world may be endless, but the processing capability and memory on its backend servers are not. There is only so much digital space you can host and process without your server stack catching fire, and there is only so much creative leeway you can take within those constraints while still keeping the business afloat. These frameworks create a system of coordinates informing the way users and investors interpret value, and in the process they create scarcity, too. While a lot of the valuation and scarcity mechanisms come from the intrinsic features of a specific metaverse as defined by its code, real-world considerations carry just as much weight, if not more. And metaverse proliferation will hardly change them or water the scarcity down. ... So, even if they are not too impressive, they will likely be hard to beat for most newer metaverse projects, which, again, takes a toll on the value of their land. By the same token, if you have one AAA metaverse and 10 projects with zero users, investors will go for the AAA one and its lands, scarce as they may be.


Building Neural Networks With TensorFlow.NET

TensorFlow.NET is a library that provides a .NET Standard binding for TensorFlow. It allows .NET developers to design, train and deploy machine learning algorithms, including neural networks. TensorFlow.NET also lets us leverage various machine learning models and access the programming resources offered by TensorFlow. TensorFlow is an open-source framework developed by Google scientists and engineers for numerical computing. It comprises a set of tools for designing, training and fine-tuning neural networks. TensorFlow's flexible architecture makes it possible to run computations on one or more processors (CPUs) or graphics cards (GPUs) in a personal computer or server without rewriting code. Keras is another open-source library for creating neural networks. It uses TensorFlow or Theano as a backend, where the operations are performed. Keras aims to simplify the use of these two frameworks: algorithms are executed by the backend and the results are returned to us. We will also use Keras in our example below.
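
As a rough illustration of the Keras workflow described above, here is a minimal sketch in Python (the layer sizes and synthetic data are arbitrary choices for the example); TensorFlow.NET exposes a closely matching Keras API for C#, so the same build-compile-fit structure carries over:

    import numpy as np
    from tensorflow import keras

    # Synthetic binary-classification data: 200 samples with 4 features each.
    rng = np.random.default_rng(seed=0)
    X = rng.random((200, 4)).astype("float32")
    y = (X.sum(axis=1) > 2.0).astype("float32")

    # A small feed-forward network built with the Keras Sequential API.
    model = keras.Sequential([
        keras.layers.Dense(8, activation="relu", input_shape=(4,)),
        keras.layers.Dense(1, activation="sigmoid"),
    ])

    # Keras drives the TensorFlow backend for training and evaluation.
    model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
    model.fit(X, y, epochs=5, batch_size=16, verbose=0)
    loss, accuracy = model.evaluate(X, y, verbose=0)
    print(f"loss={loss:.3f} accuracy={accuracy:.3f}")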


4 examples of successful IT leadership

IT leaders are responsible for implementing technology and data infrastructure across an organization. They can include CIOs, CTOs, and increasingly, CDOs (chief data officers). To do this effectively, IT teams need employee buy-in, which means illustrating clearly how new technology tools and project management can benefit the company's mission and goals. To win the full support of the employee base, IT teams must explain the implementation process and expected timeline. While data platforms and cloud infrastructure are important, the table stakes are tools that allow for internal communication and collaboration. Many IT teams are leveraging business process management (BPM) platforms, which enable better collaboration between remote and in-office teams by offering a shared view of projects. These platforms allow for greater visibility and communication across organizations while reducing meeting time and improving workflow efficiency. Technology has the potential to increase productivity, provide greater visibility of projects for employees and managers, and automate tasks that are repetitive and time-consuming.


Why 5G is the heart of Industry 4.0

The Internet of Things (IoT) is an integral part of the connected economy. Many manufacturers are already using IoT solutions to track assets in their factories, consolidating their control rooms and increasing their analytics capability through the installation of predictive maintenance systems. Of course, without the ability to connect these devices, Industry 4.0 will naturally languish. While low-power wide-area networks (LPWAN) are sufficient for some connected devices, such as smart meters that only transmit very small quantities of data, the opposite is true of IoT deployment in manufacturing, where numerous data-intensive machines are often used in close proximity. This is why 5G connectivity is key to Industry 4.0. In a market reliant on data-intensive machine applications, such as manufacturing, the higher speeds and low latency of 5G are required for the effective use of automated robots, wearables and VR headsets, shaping the future of smart factories. And while some connected devices have used 4G networks on unlicensed spectrum, 5G allows this to take place at an unprecedented scale.


How to Handle Authorization in a Service Mesh

A service mesh addresses the challenges of service communication in a large-scale application. It adds an infrastructure layer that handles service discovery, load balancing and secure communication for the microservices. Commonly, a service mesh complements each microservice with an extra component, a proxy often referred to as a sidecar or data plane. The proxy intercepts all traffic from and to its accompanying service. It typically uses mutual TLS, an encrypted connection with client authentication, to communicate with other proxies in the service mesh. This way, all traffic between the services is encrypted and authenticated without updating the application. Only services that are part of the service mesh can participate in the communication, which is a security improvement. In addition, the service mesh's management features allow you to configure the proxy and enforce policies such as allowing or denying particular connections, further improving security. To implement a zero trust architecture, however, you must consider several layers of security. The application should not blindly trust a request even when receiving it over an encrypted wire.
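
As a sketch of that last point, the Python snippet below uses the PyJWT library to show a service verifying a caller's token on every request rather than trusting the mesh's encrypted channel alone. The secret, audience and scope names are illustrative assumptions, not values from any real deployment:

    import jwt  # PyJWT

    SHARED_SECRET = "replace-with-real-key-material"  # illustrative only
    EXPECTED_AUDIENCE = "orders-service"              # hypothetical service name

    def authorize_request(token: str, required_scope: str) -> bool:
        # Authenticate: verify the token's signature, expiry and audience,
        # instead of assuming the mTLS channel implies a trustworthy caller.
        try:
            claims = jwt.decode(
                token,
                SHARED_SECRET,
                algorithms=["HS256"],       # pin the accepted algorithm
                audience=EXPECTED_AUDIENCE,
            )
        except jwt.InvalidTokenError:
            return False
        # Authorize: enforce fine-grained policy on top of authentication.
        return required_scope in claims.get("scope", "").split()

In a real mesh the proxy handles the transport-level mTLS, while an application-level check like this one supplies the per-request authorization that zero trust calls for.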


DevOps nirvana is still a distant goal for many, survey suggests

"Development teams, in general, have hardly any insight into how customers benefit from their work, and few are able to discuss these benefits with the business," the authors report. "Having such insights ready at hand would improve collaboration between IT and the business. The more customer value metrics a development team tracks, the more positive that team views their working relationship with the business. Without knowing whether the intended value for the customer is being achieved or not, development teams are effectively flying blind." The LeanIX authors calculate that 53% work on a team with a 'low level' of DevOps based on maturity factors. Still, nearly 60% said that they are flexible in adapting to changing customer needs and have CI/CD pipelines set up. At the same time, less than half of engineers build, ship, or own their code or work on teams based on team topologies, indicating a lack of DevOps maturity. Fewer than 20% of respondents said that their development team was able to choose its own tech stack; 44% said they are partly able to, and 38% they are not able to at all.


Survey Shows Increased Reliance on DORA Metrics

Overall, the survey revealed just under half of the respondents (47%) said their organization had a high level of DevOps maturity, defined as having adopted three or more DevOps working methods. Those working methods are: being flexible to changes in customer needs; having implemented a CI/CD platform; having all engineers build, ship and own their own code; organizing teams around team topologies; and letting each team choose its own technology stack. Of course, each organization will determine for itself what level of DevOps depth is required. For example, not every organization will see the need for teams to be organized around topologies or to be free to choose their own technology stack. In fact, Rose said the survey made it clear that larger enterprise IT organizations tend to have a lower overall level of DevOps maturity. One reason, Rose noted, is that many larger organizations are still employing legacy processes to build and deploy software. Most developers are also further along in embracing continuous integration (CI) than IT operations teams are in adopting continuous delivery (CD), Rose added.



Quote for the day:

"It is not joy that makes us grateful. It is gratitude that makes us joyful." -- David Rast