Daily Tech Digest - September 16, 2021

Zero Trust Requires Cloud Data Security with Integrated Continuous Endpoint Risk Assessment

Most of us are tired of talking about the impact of the pandemic, but it was a watershed event in remote working. Most organizations had to rapidly extend their existing enterprise apps to all their employees, remotely. And since many had already embraced the cloud and had a remote access strategy in place, typically a VPN, they simply extended what they had to all users. CEOs and COOs wanted this to happen quickly and securely, and Zero Trust was the buzzword that most understood as the right way to make this happen. So vendors all started to explain how their widget enabled Zero Trust or at least a part of it. But remember, the idea of Zero Trust was conceived way back in 2014. A lot has changed over the last seven years. Apps and data that have moved to the cloud do not adhere to corporate domain-oriented or file-based access controls. Data is structured differently or unstructured. Communication and collaboration tools have evolved. And the endpoints people use are no longer limited to corporate-issued and managed domain-joined Windows laptops.

What We Can Learn from the Top Cloud Security Breaches

Although spending on cybersecurity grew 10% during 2020, this increase fell far short of accelerated investments in business continuity, workforce productivity and collaboration platforms. Meanwhile, spending on cloud infrastructure services was 33% higher than the previous year, spending on cloud software services was 20% higher, and there was a 17% growth in notebook PC shipments. In short, cybersecurity spending in 2020 did not keep up with the pace of digital transformation, creating even greater gaps in organizations’ ability to effectively address the security challenges introduced by public cloud infrastructure and modern containerized applications: complex environments, fragmented stacks and borderless infrastructure, not to mention the unprecedented speed, agility and scale. See our white paper, Introduction to Cloud Security Blueprint, for a detailed discussion of cloud security challenges, with or without a pandemic. In this blog post, we look at nine of the biggest cloud breaches of 2020, where “big” is not necessarily the number of data records actually compromised but rather the scope of the exposure and potential vulnerability.

When is AI actually AI? Exploring the true definition of artificial intelligence

Whatever the organisation, consumers insist on seeing instant results – with personalisation being ever more important. If this isn’t happening, businesses will start seeing ‘drop off’ as customers seek an alternative, which, in today’s competitive market, could prove disastrous. There is an opportunity now for businesses to combat this by implementing true, bespoke AI models that can sift through vast amounts of data and make their own intelligent decisions. After all, the amount of data being generated across the globe is skyrocketing, and organisations are continuing to share their data with one another – so organisation and analysis at this level is a must. However, it’s important to note that AI isn’t for everyone. The move to AI is a huge leap, so businesses must consider whether they actually need AI to achieve their goals. In some cases, investing in advanced analytics and insights is sufficient to help a business run, grow and create value. So, if advanced analytics does the job, why invest in AI? Most AI projects fail because there is no real adoption after the initial proof of concept.

How DevOps teams are using—and abusing—DORA metrics

DORA stands for DevOps Research and Assessment, an information technology and services firm founded by Gene Kim and Nicole Forsgren. In Accelerate, Nicole, Gene and Jez Humble collected and summarized the outcomes many of us have seen when moving to a continuous flow of value delivery. They also discussed the behaviors and culture that successful organizations use and provided guidance on what to measure and why. ... Related to this is the idea of using DORA metrics to compare delivery performance between teams. Every team has its own context. The product is different, with different delivery environments and different problem spaces. You can track team improvement and, if you have a generative culture, show teams how they are improving compared to one another, but stack-ranking teams will have a negative effect on customer and business value. Where the intent of the metrics is to manage performance rather than track the health of the entire system of delivery, the metrics push us down the path toward becoming feature factories.
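The four DORA metrics themselves — deployment frequency, lead time for changes, change failure rate, and time to restore service — can be computed from plain deployment records. A minimal sketch of that computation (the record fields and units here are illustrative choices, not taken from Accelerate):

```python
from datetime import datetime

def dora_metrics(deploys):
    """Compute simple DORA metrics from a list of deployment records.

    Each record is a dict with:
      'committed_at' (datetime of the change), 'deployed_at' (datetime),
      'failed' (bool), 'restored_at' (datetime or None, if failed).
    """
    days = (max(d['deployed_at'] for d in deploys)
            - min(d['deployed_at'] for d in deploys)).days or 1
    lead_times = [(d['deployed_at'] - d['committed_at']).total_seconds() / 3600
                  for d in deploys]
    failures = [d for d in deploys if d['failed']]
    restore_hours = [(d['restored_at'] - d['deployed_at']).total_seconds() / 3600
                     for d in failures if d['restored_at']]
    return {
        'deploy_frequency_per_day': len(deploys) / days,
        'median_lead_time_hours': sorted(lead_times)[len(lead_times) // 2],
        'change_failure_rate': len(failures) / len(deploys),
        'mean_time_to_restore_hours': (sum(restore_hours) / len(restore_hours)
                                       if restore_hours else 0.0),
    }
```

Tracking these per team over time (rather than ranking teams against each other) is the use the authors recommend.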

Intel AI Team Proposes A Novel Machine Learning (ML) Technique, MERL

What is unique about their design is that it allows all learners to contribute to and draw from a single buffer at the same time. Each learner had access to everyone else’s experiences, which aided its own exploration and made it significantly more efficient at its own task. The second group of agents, dubbed actors, was tasked with combining all of the little movements in order to achieve the broader goal of prolonged walking. Since these agents were rarely close enough to register a reward, the team used a genetic algorithm, a technique that simulates biological evolution through natural selection. Genetic algorithms start with possible solutions to a problem and utilize a fitness function to develop the best answer over time. They created a set of actors for each “generation,” each with a unique method for completing the walking job. They then graded them according to their performance, keeping the best and discarding the others. The following generation of actors was the survivors’ “offspring,” inheriting their policies.
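The evolutionary loop described above — grade each generation of actors with a fitness function, keep the best, and produce mutated offspring from the survivors — can be sketched in a few lines. This is a toy version, not Intel's MERL code: a list of floats stands in for an actor's policy.

```python
import random

def evolve(fitness, pop_size=20, genome_len=8, generations=50, elite=4):
    """Minimal genetic algorithm: rank a population of genomes by a
    fitness function, keep the 'elite' best, and fill the next
    generation with mutated offspring of the survivors."""
    random.seed(0)  # deterministic for illustration
    pop = [[random.uniform(-1, 1) for _ in range(genome_len)]
           for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)   # grade the actors
        survivors = pop[:elite]               # keep the best, discard the rest
        offspring = []
        while len(survivors) + len(offspring) < pop_size:
            parent = random.choice(survivors)
            # offspring inherit the parent's policy, with small mutations
            offspring.append([g + random.gauss(0, 0.1) for g in parent])
        pop = survivors + offspring
    return max(pop, key=fitness)

# Toy fitness: prefer genomes close to all-ones.
best = evolve(lambda g: -sum((x - 1.0) ** 2 for x in g))
```

MERL's twist, per the description above, is pairing a loop like this with gradient-based learners that all share one experience replay buffer.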

Backend For Frontend Authentication Pattern with Auth0 and ASP.NET Core

The Backend For Frontend (a.k.a. BFF) pattern for authentication emerged to mitigate any risk that may occur from negotiating and handling access tokens from public clients running in a browser. The name also implies that a dedicated backend must be available for performing all the authorization code exchange and handling of the access and refresh tokens. This pattern relies on OpenID Connect, which is an authentication layer that runs on top of OAuth to request and receive identity information about authenticated users. ... Visual Studio ships with three templates for SPAs with an ASP.NET Core backend. As shown in the following picture, those templates are ASP.NET Core with Angular, ASP.NET Core with React.js, and ASP.NET Core with React.js and Redux, which includes all the necessary plumbing for using Redux. ... The authentication middleware parses the JWT access token and converts each attribute in the token into a claim attached to the current user in context. Our policy handler uses the claim associated with the scope for checking that the expected scope is there.
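The scope check described above can be illustrated outside ASP.NET Core. This Python sketch is illustrative only — a real middleware validates the token signature before trusting any claim, which is omitted here. It pulls the `scope` claim out of a JWT payload and tests for an expected scope:

```python
import base64
import json

def has_scope(jwt_token, required_scope):
    """Decode the payload segment of a JWT (header.payload.signature)
    and check the space-delimited 'scope' claim for a required scope.
    NOTE: no signature verification — for illustration only."""
    payload_b64 = jwt_token.split('.')[1]
    payload_b64 += '=' * (-len(payload_b64) % 4)  # restore base64 padding
    claims = json.loads(base64.urlsafe_b64decode(payload_b64))
    return required_scope in claims.get('scope', '').split()
```

In the BFF pattern this kind of check runs on the dedicated backend, never in the browser, because the browser is never handed the access token at all.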

REvil/Sodinokibi Ransomware Universal Decryptor Key Is Out

While Bitdefender isn’t able to share details about the key, given the fact that the firm mentioned a “trusted law enforcement partner,” Boguslavskiy conjectured that Bitdefender likely “conducted an advanced operation on REvil’s core servers and infrastructures with or for European law enforcement and was somehow able to reconstruct or obtain the master key.” Using the key in a decryptor will unlock any victim, he said, “unless REvil redesigned their entire malware set.” But even if the reborn REvil did redesign the original malware set, the key will still be able to unlock victims that were attacked prior to July 13, Boguslavskiy said. Advanced Intel monitors the top actors across all underground discussions, including on XSS, a Russian-language forum created to share knowledge about exploits, vulnerabilities, malware and network penetration. So far, the intelligence firm hasn’t spotted any substantive discussion about the universal key on these underground forums. Boguslavskiy did note, however, that the administrator of XSS has been trying to shut down discussion threads, since they “don’t see any use in the gossip.”

What to expect from SASE certifications

Secure access service edge (SASE) is a network architecture that rolls SD-WAN and security into a single, centrally managed cloud service that promises simplified WAN deployment, improved security, and better performance. According to Gartner, SASE’s benefits are transformational because it can speed deployment time for new users, locations, applications and devices as well as reduce attack surfaces and shorten remediation times by as much as 95%. ... The level one certification has twelve sections, and it takes about a day to complete. Level two has five stages, takes about half a day, and requires that applicants first complete level one. The training and testing are delivered on the Credly platform. “It integrates with LinkedIn, so it’s automatically shared on your LinkedIn profile,” Webber-Zvik says. As of Sept. 1, more than 1,000 people have earned level one certification, and they represent multiple levels of professional experience and job categories. Half are current Cato customers, and some of the rest may be considering going with Cato, says Dave Greenfield, Cato’s director of technology evangelism.

The difference between physical and behavioural biometrics, and which you should be using

The debate around digital identity has never been more important. The COVID-19 pandemic pushed us almost entirely online, with many businesses pivoting to become e-tailers almost overnight. Our reliance on online services – whether ordering a new bank card, getting your groceries delivered, or talking to friends – has given bad actors the perfect hunting ground. With the advent of the internet, the world moved online. However, authentication processes from the physical world were digitised rather than re-designed for the digital world. The processes businesses digitised lack security, are cumbersome and don’t preserve privacy. For example, the password: it is now 60 years old, yet still relied on today to protect our identities and data. Digitised processes have enabled the rise in online fraud, scams, social engineering, and synthetic identities. Our own research highlighted how a quarter of consumers globally receive more scam text messages than they get from friends and families, with over half (54%) of UK consumers stating that they trust organisations less after receiving a scam message.

Resetting a Struggling Scrum Team Using Sprint 0

It is hard to determine in Sprint 0 if you are done. There is a balance to strike between performing enough upfront planning and agreement to provide clarity and comfort, and taking significant time away from delivery to plan for every eventuality that could appear in the sprints that follow Sprint 0. After running these sessions, we entered our first delivery sprint in the hope that the agreed ways of working would help us overcome any challenges we found together. However, we encountered a few rocks that we had to navigate around on our path to quieter seas. One early issue that surfaced was the level of bonding within the team. Despite the new team members settling in well, and communication channels being agreed upon to help Robin and the others collaborate, it became clear that the developer group needed to build trust to work effectively. Silence was a big part of many planning and refinement ceremonies. This was not a team of strong extroverts, and I had concerns that the team was not comfortable speaking up.

Quote for the day:

"Leadership is the art of influencing people to execute your strategic thinking" -- Nabil Khalil Basma

Daily Tech Digest - September 15, 2021

Understanding the journey of breached customer data

It’s known that hackers often use the names of the breached organisation when marketing, selling or leaking their stolen data. So, it’s worth deploying a system that monitors for supplier names, as well as your own, on forums and ransomware sites. This includes searching for common typos and variants of these names. There are, however, some limitations to this method, as these searches could lead to lots of false positives. Security teams need to filter through the data to find matches, but this can take time. Businesses can use database identifiers to improve monitoring efficiency. These take the form of unique strings within databases, such as server names and IP addresses. Teams can then match metadata included in a data leak when searching through database dumps. Patterns within data, including account numbers, customer IDs and reference numbers, are also useful for identification. Another technique is ‘watermarking’ data by adding synthetic identities to a data set. Unique identifiers are used in your data sets or those you share in your digital supply chain so you can confirm if a breach includes data from your business or a supplier.
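The watermarking idea can be sketched directly: plant one synthetic identity per supplier in the data you share, then search any leaked dump for those canaries. The supplier names and canary values below are hypothetical:

```python
# Hypothetical canary records: synthetic identities that exist nowhere
# except in the data set handed to one specific supplier.
CANARIES = {
    "supplier-a": {"email": "jane.canary.a@example.com", "customer_id": "CA-100001"},
    "supplier-b": {"email": "june.canary.b@example.com", "customer_id": "CB-200002"},
}

def watermark(records, supplier):
    """Return a copy of the data set with that supplier's canary appended."""
    return records + [CANARIES[supplier]]

def attribute_leak(dump_text):
    """Given raw text from a leaked database dump, report which suppliers'
    canary identifiers appear in it."""
    return sorted(supplier for supplier, canary in CANARIES.items()
                  if canary["email"] in dump_text
                  or canary["customer_id"] in dump_text)
```

A hit on a canary confirms not only that the dump contains your data, but which link in the digital supply chain it left through.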

Top 12 Cloud Security Best Practices for 2021

In a private data center, the enterprise is solely responsible for all security issues. But in the public cloud, things are much more complicated. While the buck ultimately stops with the cloud customer, the cloud provider assumes the responsibility for some aspects of IT security. Cloud and security professionals call this a shared responsibility model. Leading IaaS and platform as a service (PaaS) vendors like Amazon Web Services (AWS) and Microsoft Azure provide documentation to their customers so all parties understand where specific responsibilities lie according to different types of deployment. The diagram below, for example, shows that application-level controls are Microsoft’s responsibility with software as a service (SaaS) models, but it is the customer’s responsibility in IaaS deployments. For PaaS models, Microsoft and its customers share the responsibility. ... To prevent hackers from getting their hands on access credentials for cloud computing tools, organizations should train all workers on how to spot cybersecurity threats and how to respond to them.
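The split that the Microsoft diagram describes for application-level controls can be captured in a small lookup. This is a sketch of just that one row; a real responsibility matrix covers many more control layers:

```python
# Who owns application-level controls under the shared responsibility
# model, per the SaaS/PaaS/IaaS example described above.
APP_CONTROL_RESPONSIBILITY = {
    "SaaS": "provider",   # e.g. Microsoft's responsibility
    "PaaS": "shared",     # provider and customer share it
    "IaaS": "customer",   # customer's responsibility
}

def who_owns_app_controls(deployment_model):
    """Look up which party is responsible for application-level controls."""
    return APP_CONTROL_RESPONSIBILITY[deployment_model]
```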

How to Deploy Disruptive Technologies with Minimal Disruption

A disruptive technology can have a particularly hard impact on end users. “Discuss change, and the human reaction to it, as part of your educational process, acknowledging that it’s hard and everyone at every level of the organization must go through it,” says Tammie Pinkston, director of organizational change management at technology research and advisory firm ISG. “We recently held a client training [program] where individuals used a sticker to show where they were on the change curve, mapping themselves each day with indicators so we could see movement.” If a disruptive technology will impact multiple departments, all parties should be involved in the rollout process. “One of the reasons it's important to assess all the different interactions and impacts is to bring in the right expertise and oversight,” Lightman says. This may, for instance, require seeking input and support from HR and security teams. “It's better to be overly cautious than to have an issue arise later when you didn't include representation from a department,” he notes. Still, despite best efforts, it remains possible to overlook some technology stakeholders.

Update on .NET Multi-platform App UI (.NET MAUI)

.NET Multi-platform App UI (.NET MAUI) makes it possible to build native client apps for Windows, macOS, iOS, and Android with a single codebase and provides the native container and controls for Blazor hybrid scenarios. .NET MAUI is a wrapper framework and development experience in Visual Studio that abstracts native UI frameworks already available – WinUI for Windows, Mac Catalyst for macOS/iPadOS, iOS, and Android. Although it’s not another native UI framework, there is still a significant amount of work to provide optimal development and runtime experiences across these devices. The .NET team has been working hard with the community in the open on its development and we are committed to its release. Unfortunately, .NET MAUI will not be ready for production with .NET 6 GA in November. We want to provide the best experience, performance, and quality on day 1 to our users and to do that, we need to slip the schedule. We are now targeting early Q2 of 2022 for .NET MAUI GA. In the meantime, we will continue to enhance Xamarin and recommend it for building production mobile apps and continue releasing monthly previews of .NET MAUI.

8 top cloud security certifications

As companies move more and more of their infrastructure to the cloud, they're forced to shift their approach to security. The security controls you need to put in place for a cloud-based infrastructure are different from those for a traditional datacenter. There are also threats specific to a cloud environment. A mistake could put your data at risk. It's no surprise that hiring managers are looking for candidates who can demonstrate their cloud security know-how—and a number of companies and organizations have come up with certifications to help candidates set themselves apart. As in many other areas of IT, these certs can help give your career a boost. "Cloud security certifications can set professionals up for long-term career success in designing, operating, and maintaining secure cloud environments for today’s enterprises," says Joe Vadakkan, senior director of services alliances at Optiv. "In addition to the process being a fun learning experience, each certification offers a unique benefit to understanding the security controls, associated risks, and dynamic needs of cloud operating models."

Juniper enables Mist to handle network-fabric management

Juniper Networks is embracing an open campus-fabric management technology supported by other major networking vendors and at the same time making it simpler to use by removing much of the manual work it can require. The company is adding Ethernet VPN-Virtual Extensible LAN (EVPN-VXLAN) support to its Mist AI cloud-based management platform to let customers streamline network operations. EVPN-VXLAN separates the underlying physical network from the virtual overlay network, offering integrated Layer 2/Layer 3 connectivity as well as programmability, automation and network segmentation among other features. The open technology is offered in a variety of forms by most networking vendors, including Cisco, Arista, Aruba and others. “Many of today’s campus networks leverage proprietary technologies and complicated L2/L3 architectures that weren’t designed to meet modern requirements,” wrote Jeff Aaron, vice president of Enterprise Marketing at Juniper, in a blog about the announcement.
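The segmentation EVPN-VXLAN provides rests on the 24-bit VXLAN Network Identifier (VNI) carried in each encapsulated frame. A minimal sketch of the 8-byte VXLAN header defined in RFC 7348:

```python
import struct

def vxlan_header(vni):
    """Build the 8-byte VXLAN header from RFC 7348: a flags byte with
    the I bit set (0x08), 24 reserved bits, the 24-bit VNI that
    segments the overlay network, and a final reserved byte."""
    if not 0 <= vni < 2 ** 24:
        raise ValueError("VNI must fit in 24 bits")
    return struct.pack("!I", 0x08000000) + struct.pack("!I", vni << 8)

def parse_vni(header):
    """Recover the VNI from a VXLAN header."""
    return struct.unpack("!I", header[4:8])[0] >> 8
```

The 24-bit VNI is what lets an overlay carry roughly 16 million isolated segments, versus the 4,094 usable IDs of traditional 802.1Q VLANs.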

Dow CIO: Digital transformation demands rethinking talent strategy

When it comes to investing in digital, companies have many choices. There is a lot you could do, but you need to focus on what you should do. One thing is certain: You should invest in your people if you want to be successful with your digital transformation. This is not just about the technology, but using technology to change the way employees work. My IT organization continually develops its tech skills with curricula on a variety of topics, including cloud computing, machine learning, and the entire data space from architecture to data storage and data visualization. We’re also refreshing our skills around threat identification, user experience design, and expanding our programming skills by learning different programming languages. But IT organizations also need to grow their soft skills. This includes improving employees’ business acumen, so they understand how their company works and how it makes money. This not only helps organizations identify opportunities but connects them to how the tools being implemented help drive value.

Ballerina has unique features that make it particularly worthwhile for smaller programs. Most other scripting languages that are designed for smaller programs have significant differences from Ballerina in that they are dynamically typed and they don't have the unique scalability and robustness features that Ballerina has. Problems in the pre-cloud era that you could solve with other scripting languages are still relevant problems. Except now, network services are involved; robustness is now more important than ever. With standard scripting languages, a 50-line program tends to become an unmaintainable 1000-line program a few years later, and this doesn’t scale. Ballerina can be used to solve problems addressed with scripting language programs but it's much more scalable, more robust, and more suitable for the cloud. Scripting languages also typically don't have any visual components, but Ballerina does.

Tech Nation welcomes tech companies to Net Zero 2.0 programme

For the first time, the Net Zero programme from Tech Nation is welcoming space tech companies, as activity in the sector gains momentum. Satellite imaging, for example, provides a way to observe large areas from space to rapidly identify illegal activities such as deforestation or mining; monitor supply chains; and verify nature-based solutions such as carbon offsetting. This type of technology is gaining traction rapidly as countries across the world look for innovative ways to combat climate change and as multinationals seek to achieve their recently set net zero goals. Earth Blox is using satellite data to identify deforestation or mining activities, monitor supply chains and support nature-based solutions, while Sylvera uses machine learning and satellite data to verify the carbon offsetting industry. Additionally, Satellite Vu looks to measure the thermal footprint of any building on the planet every 1-2 hours, helping to drastically increase the energy efficiency of buildings, factories and power stations globally.

Travis CI Flaw Exposed Secrets From Public Repositories

The effects of the vulnerability meant that if a public repository was forked, someone could file a pull request and then get access to the secrets attached to the original public repository, according to Travis CI's explanation. Travis CI's documentation says that secrets shouldn't be available to external pull requests, says Patrick Dwyer, an Australian software developer who works with the Open Web Application Security Project, known as OWASP. "They [Travis CI] must have introduced a bug and made those secrets available," Dwyer says. Travis CI's flaw represents a supply-chain risk for software developers and any organization using software from projects that use Travis CI, says Geoffrey Huntley, an Australian software and DevOps engineer. "For a CI provider, leaking secrets is up there with leaking the source code as one of the worst things you never want to do," Huntley says. Travis CI has issued a security bulletin, but some critics say it is insufficient given the gravity of the vulnerability.
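The behaviour Travis CI's documentation describes — secrets withheld from external (forked) pull requests — amounts to a simple gate, which the bug evidently bypassed. A sketch of that policy; the field names here are hypothetical, not Travis CI's actual API:

```python
def secrets_for_build(build):
    """Return the secrets a CI build should receive: none if the build
    is a pull request whose head repository differs from the base
    repository (i.e. an external fork PR), otherwise the full set."""
    is_fork_pr = (build["is_pull_request"]
                  and build["head_repo"] != build["base_repo"])
    return {} if is_fork_pr else build["secrets"]
```

The reported flaw was, in effect, this gate failing: fork PRs received the secrets attached to the original repository.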

Quote for the day:

"Leaders must be close enough to relate to others, but far enough ahead to motivate them." -- John C. Maxwell

Daily Tech Digest - September 14, 2021

Honing Cybersecurity Strategy When Everyone’s a Target for Ransomware

While not all hackers are out for the money, if they are, they become particularly crafty at plying their trade. What malicious actors are often looking for are the “keys to the kingdom” — the most lucrative mission-critical information, passwords, contacts or accounts — which is usually found within the C-suite. And not only do C-suite targets have the most valuable organizational data, but they are also the decision-makers of whether to pay a ransom. This creates two situations that put executives under even greater threat. First, it makes a ransomware attack on a C-suite decision maker incredibly efficient, which achieves maximum ROI for threat actors. Second, it makes a C-suite executive’s personal communications incredibly valuable and particularly vulnerable. The tighter cybercriminals can twist the screws with embarrassing business and private communications threatened for release, the greater their chances for payment – and often, the more they can demand. The sad reality is that the majority of executives, and particularly their direct reports, are incredibly soft targets.

What Do Engineers Really Think About Technical Debt?

It's no surprise that technical debt causes bugs, outages and quality issues, and slows down the development process. But the impact of tech debt is far greater than that. Employee morale is one of the most difficult things to manage, especially now that companies are switching to long-term remote work solutions. Many engineers mentioned that technical debt is actually a major driver of decreasing morale. They often feel like they are forced to prioritize new features over vital maintenance work that could improve their experience and velocity, and this is taking a significant toll. ... More than half of respondents claim that their companies do not deal with technical debt well, highlighting that the divide between engineers and leadership is widening rather than closing. Engineers are clearly convinced that technical debt is the primary reason for productivity losses, yet they seem to be struggling to make it a priority. Making the case could pay off: as many as 66% of engineers believe their team would ship up to 100% faster if they had a process for managing technical debt.

Human-Machine Understanding: how tech helps us to be more human

Human-Machine Understanding, or HMU, is one of the lines of enquiry currently getting me out of bed in the morning, and I’m sure that it will shape a new age of empathic technology. In the not-too-distant future, we’ll be creating machines that comprehend us, humans, at a psychological level. They’ll infer our internal states – emotions, attention, personality, health and so on – to help us make useful decisions. But let’s just press pause on the future for a moment, and track how far we’ve come. Back in 2015, media headlines were screaming about the coming dystopia/utopia of artificial intelligence. On one hand, we were all doomed: humans faced the peril of extinction from robots or were at least at risk of having their jobs snatched away by machine learning bots. On the other hand, many people – me included – were looking forward to a future where machines answered their every need. We grasped the fact that intelligent automation is all about augmenting human endeavour, not replacing it.

Essential Soft Skills for IT leaders in a Remote World

People in positions of authority often aim to project unbreakable confidence, but a better path to building connections is through honesty. Above all, being open about insecurities, uncertainties, and failures is humanizing—a critical trait in the age of Zoom. Conversely, ultra-strict managers may find their teammates become reticent to speak up about risks they see. Such an environment is anathema to multidisciplinary IT fields, given the need for transparent workflows. Being vulnerable at work is not only about trying to show something to your teammates; it is also about establishing and growing a safe environment for the colleagues you work with. In my experience, it’s hard for people to speak up about sensitive topics like challenges, difficult conversations, or disagreement with someone at work. But these aspects become much easier when the team, including leadership, has built an environment where everyone trusts that they are free to express their opinions and share their feelings about their work.

The past, present and future of IoT in physical security

As ever, the amount of storage that higher-resolution video generates is the limiting factor, and the development of smart storage technologies such as Zipstream has helped tremendously in recent years. We will likely see further improvements in smart storage and video compression that will help make higher-resolution video possible. Cybersecurity will also be a growing concern for both manufacturers and end users. Recently, one of Sweden’s largest retailers was shut down for a week because of a hack, and others will meet the same fate if they continue to use poorly secured devices. Any piece of software can contain a bug, but only developers and manufacturers committed to identifying and fixing these potential vulnerabilities can be considered reliable partners. Governments across the globe will likely pass new regulations mandating cybersecurity improvements, with California’s recent IoT protection law serving as an early indicator of what the industry can expect. Finally, ethical behavior will continue to become more important. A growing number of companies have begun foregrounding their ethics policies, issuing guidelines for how they expect technology like facial recognition to be used — not abused.

Leading under pressure

“There is a well-accepted and common wisdom that success breeds confidence, and that confidence helps you handle pressure better,” explained Jensen. “My read, without having talked to Simone Biles or knowing exactly what is going on in her head, is that there is a countervailing force to that positive cycle, which is that as you accrue status and visibility, the ‘importance’ piece gets greatly magnified. The stakes expand. They begin to encompass your self-worth and the weight of the 330 million people you are carrying along for the ride.” Business leaders are subject to this phenomenon, too. As they reach higher levels of the corporate hierarchy, the importance of their decisions and actions grows, and the stakes rise. And like pressure itself, the element of importance is a double-edged sword. ... How do you manage importance during these peak pressure moments? The secret is to understand that how you perceive the stakes in any given situation can be controlled. “When you get into peak pressure moments, all you can think about is how important [the stakes are], what you might gain, what you might lose,” said Jensen.

IT leaders facing backlash from remote workers over cybersecurity measures: HP study

Ian Pratt, global head of security for personal systems at HP, said the fact that workers are actively circumventing security should be a worry for any CISO. "This is how breaches can be born," Pratt said. "If security is too cumbersome and weighs people down, then people will find a way around it. Instead, security should fit as much as possible into existing working patterns and flows with unobtrusive, secure-by-design and user-intuitive technology. Ultimately, we need to make it as easy to work securely as it is to work insecurely, and we can do this by building security into systems from the ground up." IT leaders have had to take certain measures to deal with recalcitrant remote workers, including updating security policies and restricting access to certain websites and applications. But these practices are causing resentment among workers, 37% of whom say the policies are "often too restrictive." The survey of IT leaders found that 90% have received pushback because of security controls, and 67% said they get weekly complaints about it.

OSI Layer 1: The soft underbelly of cybersecurity

The metadata from a switch can indicate whether a rogue device is present. This can be accomplished without mirroring traffic, to respect privacy within sensitive IT environments. Supply chain exposure is more complex than managing where you order from: it’s a two-fold problem involving both software and hardware. It’s understood that many applications bundle libraries and controls from third parties that are further outside of your purview. Attackers exploit weaknesses and defects from an array of targets, including unsecured source code, outdated network protocols (downgrade attacks), unsecured third-party servers, and update mechanisms. Safeguarding software is under your control: deploying least-privilege principles, endpoint protection, and due diligence to audit and assess third-party partners are essential and reasonable precautions. Hardware is another story altogether. It’s less obvious when a fully functioning Raspberry Pi has been modified or telecommunications equipment has been compromised by a state actor, as it looks and plays the part without any irregularities.
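Checking switch metadata for rogue devices, with no traffic mirroring involved, can be as simple as comparing the switch's MAC address table against a device inventory. A sketch with hypothetical addresses and port names:

```python
# Hypothetical inventory: MAC addresses of every device the
# organization has issued or approved.
KNOWN_MACS = {
    "aa:bb:cc:00:00:01",
    "aa:bb:cc:00:00:02",
}

def find_rogue_devices(mac_table):
    """Given (port, MAC) entries read from a switch's MAC address
    table — metadata only, no mirrored traffic — flag any MAC that is
    not in the device inventory, along with the port it appeared on."""
    return [(port, mac) for port, mac in mac_table
            if mac.lower() not in KNOWN_MACS]
```

In practice the inventory would come from an asset database, and entries would also be checked against known-OUI lists for devices like off-the-shelf single-board computers.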

Desensitized To Devastation: Strategies For Reaching CISOs In Today’s Cyber Landscape

Hackers only need to be right once. One set of compromised credentials puts them on their way to snatching your critical assets. Security teams, on the other hand, have to be right all the time. There’s no logging off at the end of the 9-to-5 workday for criminals. They’re active when you’re awake, they’re active when you’re asleep and they’re active when you’re celebrating the holidays with your families. All it takes is one right guess of a password and a company could lose millions of dollars, customer data, its reputation and its stock price — and the CISO could lose their job. Businesses can’t afford to have weak security infrastructures that aren’t monitoring for and shutting down threats 24/7. ... Ransomware was up 93% in 2021 from 2020, according to Check Point, and we’ve recently suffered some major cyberattacks. The country has been hit with attacks that have massive implications for daily life and business, like the Colonial Pipeline and Kaseya attacks. And external threats aren’t all we have to worry about. 

Bad News: Innovative REvil Ransomware Operation Is Back

Unfortunately, with its infrastructure coming back online, REvil appears to be back. Notably, all victims listed on its data leak site have had their countdown timers reset, Bleeping Computer reports. Such timers give victims a specified period of time to begin negotiating a ransom payment, before REvil says it reserves the right to dump their stolen data online. REvil is one of a number of ransomware operations that regularly tells victims that it's stolen sensitive data, before it forcibly encrypts systems and threatens to leak the data if they don't pay. But REvil's representatives have been caught lying before, by claiming to have stolen data as they extort victims into paying, only to admit later that they never stole anything. Why might the infrastructure have come back online, including the payments portal, which accepts bitcoin and monero? Numerous experts have suggested REvil was just lying low in the wake of the Biden administration pledging to get tough. Perhaps the main operators and developers opted to relocate to a country from which it might be safer to run their business. Or maybe they were just taking a vacation.

Quote for the day:

"You have two choices, to control your mind or to let your mind control you." -- Paulo Coelho

Daily Tech Digest - September 13, 2021

4 Steps for Fostering Collaboration Between IT Network and Security Teams

Collaboration requires a single source of truth or shared data that's reliable and accessible to all involved. If one team is working with outdated information, or a different type of data entirely, it won't be on the same page as the other team. Likewise, if one team lacks specific details, such as visibility into a public cloud environment, it won't be an effective partner. Unfortunately, many enterprise-level organizations struggle with data control conflicts because individual teams can be overly protective of data they extract. As a result, what is shared is sometimes inconsistent, irrelevant, or out of date. At the same time, many network and security tools are already leveraging the same data, such as network packets, flows, and robust sets of metadata. This network-derived data, or "smart data," must support workflows without requiring management tool architects to cobble together multiple secondary data stores to prop it up. Consequently, network and security teams should find ways to unify their data collection and the tools they use for analysis wherever possible to overcome sharing issues.

A guide to sensor technology in IoT

There is still plenty of room for IoT sensor technology to grow, and further disrupt multiple industries, in the coming years. With a hybrid working model set to continue being common among businesses, the use of IoT sensors can enable employees that choose not to work on company premises to carry out tasks remotely. Meanwhile, as smart cities continue developing, IoT sensors will also remain a big part of the lives of citizens. With national infrastructures involving IoT sensors in the works around the world, businesses will be able to benefit from increased connectivity and decreased costs, while being able to cut carbon emissions as national and global sustainability targets loom. The roll-out of 5G also promises to boost the IoT space, with more and more device varieties set to be compatible with the burgeoning wireless technology. This won’t mean that LPWAN will lose its relevance, however — organisations will still find valuable uses for smaller amounts of data that may be easier to manage and transfer between devices. There is the breakthrough of standards such as LTE-M and NB-IoT to consider here, as well.

Real-time Point-of-Sale Analytics With a Data Lakehouse

Different processes generate data differently within the POS. Sales transactions are likely to leave a trail of new records appended to relevant tables. Returns may follow multiple paths triggering updates to past sales records, the insertion of new, reversing sales records and/or the insertion of new information in returns-specific structures. Vendor documentation, tribal knowledge and even some independent investigative work may be required to uncover exactly how and where event-specific information lands within the POS. Understanding these patterns can help build a data transmission strategy for specific kinds of information. Higher frequency, finer-grained, insert-oriented patterns may be ideally suited for continuous streaming. Less frequent, larger-scale events may best align with batch-oriented, bulk data styles of transmission. But if these modes of data transmission represent two ends of a spectrum, you are likely to find most events captured by the POS fall somewhere in between. The beauty of the data lakehouse approach to data architecture is that multiple modes of data transmission can be employed in parallel.
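The spectrum of transmission modes described above can be sketched as a simple routing policy. The event types, mode names, and the policy itself below are hypothetical, standing in for whatever a given vendor's POS actually emits:

```python
# Sketch: route POS event types to a transmission mode based on their
# frequency and granularity. All names here are illustrative assumptions.
TRANSMISSION_POLICY = {
    # high-frequency, fine-grained, insert-oriented -> continuous streaming
    "sale_line_item": "stream",
    "payment": "stream",
    # update-heavy, multi-path events -> frequent micro-batches
    "return": "microbatch",
    # infrequent, large-scale events -> bulk batch transmission
    "inventory_snapshot": "batch",
}

def route_event(event: dict) -> str:
    """Pick a transmission mode; unmapped events default to micro-batch."""
    return TRANSMISSION_POLICY.get(event["type"], "microbatch")

events = [
    {"type": "sale_line_item", "store": 12, "sku": "A-100", "qty": 2},
    {"type": "inventory_snapshot", "store": 12},
    {"type": "loyalty_update", "store": 12},  # unmapped: falls in between
]
modes = [route_event(e) for e in events]
```

The lakehouse's value here is that all three paths can land in the same governed tables, so the policy can be tuned per event type without re-architecting downstream consumers.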

The human factor in cybersecurity

People are creatures of habit who seek out shortcuts and efficiencies. If I write a 5-step process for logging in to my most secure system, at least one person will email me explaining how they found a shortcut. And there will be many who complain about having to wait 4 seconds as their login is verified. I know this, and so I push back on my team when they establish new protocols. Can we make this easier? Can we use a tool – like multi-factor authentication or PIV cards? Can we eliminate irritating parts of cybersecurity? Yes, the solutions might cost more, but the benefit is compliance. I never want to create a system that has my people jotting weekly passwords on post-it notes. So, I ask my security team to think like a busy employee, a hurried exec, and a distracted engineer – and remove complexity from our routines. Cybersecurity measures take time to work, but human brains process faster. We might accept that implementing an excellent cyber program and maintaining cyber hygiene —like basic email scanning or link scanning—adds a layer of inefficiency; however, this is often a difficult concept for employees. 

Now Is The Time To Update Your Risk Management Strategy And Prioritize Cybersecurity

It’s clear that cybersecurity threats are real for companies of all types and sizes, and so if there is one area of risk management businesses can strengthen this year, it should be this. The good news is that many companies are doing just that. A recent OnePoll survey of 375 senior-level IT security professionals, commissioned by my company, confirms this. Respondents in our survey indicated that recent data breaches, like SolarWinds, are impacting the way their organizations prioritize cybersecurity. Nearly all respondents believe cybersecurity is considered a top business risk within their organizations, and 82% say these breaches have either greatly or somewhat impacted the way their organization prioritizes cybersecurity. The U.K.’s Department for Digital, Culture, Media and Sport commissioned another survey that underscores these findings. They found that 77% of businesses say cybersecurity is “a high priority for their directors or senior managers.” This prioritization is also turning into real investment into cybersecurity measures by businesses, which means company leaders are walking the talk.

7 Microservices Best Practices for Developers

At times, it might seem to make sense for different microservices to access data in the same database. However, a deeper examination might reveal that one microservice only works with a subset of database tables, while the other microservice only works with a completely different subset of tables. If the two subsets of data are completely orthogonal, this would be a good case for separating the database into separate services. This way, a single service depends on its dedicated data store, and that data store's failure will not impact any service besides that one. We could make an analogous case for file stores. When adopting a microservices architecture, there's no requirement for separate microservices to use the same file storage service. Unless there's an actual overlap of files, separate microservices ought to have separate file stores. With this separation of data comes an increase in flexibility. For example, let's assume we had two microservices, both sharing the same file storage service with a cloud provider. One microservice regularly touches numerous assets but is small in file size.

Why the ‘accidental hybrid’ cloud exists, and how to manage it

Because many security tools were designed for an on-premises world, they can lack the application-level insight needed to positively impact digital services. Businesses are therefore inevitably becoming more vulnerable to cyber attacks, especially as an ‘accidental hybrid’ environment makes it challenging to accurately monitor traffic or detect potential threats. If a SecOps team has a ‘clouded’ vision into the cloud environment, they may be forced to rely only on trace files or application logs that ultimately provide a less than perfect view into the network. What’s more, with the pervasive issue of the digital skills shortage, there is a significant lack of experts who truly understand how to secure the hybrid cloud environment. As long as a visibility strategy is prioritised, network automation becomes an invaluable solution to overcoming the issues of overstretched security professionals and the increasing ‘threatscape’. While it may have seemed a daunting process in the past, automation of data analysis is now surprisingly simple and can be integral for gaining better insight and, in turn, mitigating attacks.

How Quantifying Information Leakage Helps to Protect Systems

The first and most important step is to identify the high value secrets that your system is protecting. Not all assets need the same degree of protection. The next step is to identify observable information that could be correlated to your secret. Try to be as comprehensive as possible, considering time, electrical output, cache states, and error messages. Once you have identified what an attacker could observe, a good preventative measure is to disassociate this observable information from your sensitive information. For example, if you notice that a program processing some sensitive information takes longer with one input than another, you can take steps to standardize the processing time. You do not want to give an attacker any hints. Next, I suggest threat modeling. Identify the goals, abilities, and rewards of possible attackers. Establishing what your adversary considers "success" could inform your system design. Finally, depending on your resources, you can approximate the distribution of your secrets. 
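Standardizing processing time is easiest to see with secret comparison, the classic timing side channel. The sketch below contrasts a naive early-exit comparison with Python's `hmac.compare_digest`, whose running time does not depend on where the first mismatch occurs:

```python
import hmac

# A naive comparison returns at the first mismatching byte, so its running
# time leaks how many leading bytes of the secret a guess got right.
def naive_equal(a: bytes, b: bytes) -> bool:
    if len(a) != len(b):
        return False
    for x, y in zip(a, b):
        if x != y:
            return False
    return True

# hmac.compare_digest examines every byte regardless of where a mismatch
# occurs, disassociating the observable timing from the secret's contents.
def constant_time_equal(a: bytes, b: bytes) -> bool:
    return hmac.compare_digest(a, b)

secret = b"s3cret-token"
accepted = constant_time_equal(secret, b"s3cret-token")
rejected = constant_time_equal(secret, b"wrong-token!")
```

The same principle generalizes to the other observables the author lists: pad response sizes, normalize error messages, and avoid secret-dependent branches.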

How to explain DevSecOps in plain English

DevSecOps extends the same basic principle to security: It shouldn’t be the sole responsibility of a group of analysts huddled in a Security Operations Center (SOC) or a testing team that doesn’t get to touch the code until just before it gets deployed. That was the dominant model in the software delivery pipelines of old: Security was a final step, rather than something considered at every step. And that used to be at least passable, for the most part. As Red Hat's DevSecOps primer notes, “That wasn’t as problematic when development cycles lasted months or even years, but those days are over.” Those days are most definitely over. That final-stage model simply didn’t account for cloud, containers, Kubernetes, and a wealth of other modern technologies. And regardless of a particular organization’s technology stack or development processes, virtually every team is expected to ship faster and more frequently than in the past. At its core, the role of security is quite simple: Most systems are built by people, and people make mistakes. 

End your meeting with clear decisions and shared commitment

In many cases, participants do the difficult, creative work of diagnosing issues, analyzing problems, and brainstorming new ideas but don’t reap the fruits of their labor because they fail to translate insights into action. Or, with the end of the meeting looming—and team members needing to get to their next meeting, pick up kids from school, catch a train, and so on—leaders rush to devise a plan. They press people into commitments they have not had time to think through—and then can’t (or won’t) keep to. Either of these mistakes can result in an endless cycle of meetings without solutions, leaving people feeling frustrated and cynical. Here are four strategies that can help leaders avoid these detrimental outcomes, and instead foster a sense of clarity and purpose. ... The key to this strategy: to prepare for an effective close, leaders should “cue” the group to start narrowing the options, ideas, or solutions on the table, whether it means going from ten job candidates to three or selecting the top few messages pitched for a new brand campaign. The timing for this cue varies based on the desired meeting outcomes, but it is usually best to start narrowing about halfway through the allotted time.

Quote for the day:

"People seldom improve when they have no other model but themselves." -- Oliver Goldsmith

Daily Tech Digest - September 12, 2021

How to develop a two-tiered security model for the hybrid work paradigm

Providing organizations and their stakeholders complete digital security is a part of the holistic security culture that enterprises must inculcate. This is how they can ensure that the work paradigm of the future is anchored by safety and technological progression on the back of a top-down security culture. Organizations must promote the belief that upholding digital security requirements isn’t the responsibility of the security department alone. A sustainable security culture requires a collective investment from all stakeholders in the organization. A vision that treats security as a non-negotiable asset, complemented by employee sensitization and training practices, is necessary for the safekeeping of valuable data and protection against the exploitation of vulnerabilities by threat actors. To drive optimal results, administrators must make sure that the mechanics used to deliver security training to employees account for different departments, learning styles, and abilities. Employees are the bedrock of any organization. Employee errors are common when they are unsupervised, anxious, or uneducated in matters pertaining to organizational security.

5 Habits I Learned From Successful Data Scientists at Microsoft

Continuous learning and improvement are paramount for Data Scientists looking to stand out from the crowd of other qualified data professionals. As many already know, Data Science is not a static field. Look at job descriptions, find out what skills most employers are looking for in a data scientist, and compare with your resume. Are you lacking these skills? Identify your weak points and work towards improvement. ... It’s not just about models and programming languages; it is paramount that you understand the inner workings of your profession. The truth is, if you depend only on the tricks and experience you’ve gathered from your previous or current job, there is a strong likelihood that you will remain professionally stagnant. ... There are hundreds of quality research papers, books, articles, and magazines exhibiting valuable Data Science resources to educate yourself and expand your knowledge about certain concepts in your field. Before I moved on to get my Data Science certification, I learned most of the programming languages and analysis tricks from blog posts.

Yandex Pummeled by Potent Meris DDoS Botnet

“Yandex’ security team members managed to establish a clear view of the botnet’s internal structure. L2TP [Layer 2 Tunneling Protocol] tunnels are used for internetwork communications. The number of infected devices, according to the botnet internals we’ve seen, reaches 250,000,” wrote Qrator in a Thursday blog post. L2TP is a protocol used to manage virtual private networks and deliver internet services. Tunneling facilitates the transfer of data between two private networks across the public internet. Yandex and Qrator launched an investigation into the attack and believe Mēris to be highly sophisticated. “Moreover, all those [compromised MikroTik hosts are] highly capable devices, not your typical IoT blinker connected to Wi-Fi – here we speak of a botnet consisting of, with the highest probability, devices connected through the Ethernet connection – network devices, primarily,” researchers wrote. ... While patching MikroTik devices is the most ideal mitigation to combat future Mēris attacks, researchers also recommended blacklisting.

Consistency, Coupling, and Complexity at the Edge

Although RESTful APIs are easy for backend services to call, they are not so easy for frontend applications to call. That is because an emotionally satisfying user experience is not very RESTful. Users don’t want a GUI where entities are nicely segmented. They want to see everything all at once unless progressive disclosure is called for. For example, I don’t want to navigate through multiple screens to review my travel itinerary; I want to see the summary (including flights, car rental, and hotel reservation) all on one screen before I commit to making the purchase. When a user navigates to a page on a web app or deep links into a Single Page Application (SPA) or a particular view in a mobile app, the frontend application needs to call the backend service to fetch the data needed to render the view. With RESTful APIs, it is unlikely that a single call will be able to get all the data. Typically, one call is made, then the frontend code iterates through the results of that call and makes more API calls per result item to get all the data needed.
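This N+1 pattern is easy to simulate. In the toy sketch below, a dict stands in for the REST backend and a counter tracks simulated round trips; the itinerary data and the aggregated endpoint are invented for illustration (in practice, aggregation is the role of a backend-for-frontend or a GraphQL layer):

```python
# Toy simulation of the N+1 call pattern vs. a single aggregated call.
# All endpoint names and data here are invented for illustration.
ITINERARY = {"id": "trip-42", "segments": ["flight-1", "car-7", "hotel-3"]}
SEGMENTS = {
    "flight-1": {"kind": "flight", "dest": "SFO"},
    "car-7": {"kind": "car", "vendor": "Acme"},
    "hotel-3": {"kind": "hotel", "nights": 3},
}

calls = 0  # counts simulated round trips to the backend

def get(resource_id):
    global calls
    calls += 1
    return ITINERARY if resource_id == ITINERARY["id"] else SEGMENTS[resource_id]

def fetch_restful(trip_id):
    """Entity-per-call REST: 1 call for the trip + 1 per segment (N+1)."""
    trip = get(trip_id)
    return [get(s) for s in trip["segments"]]

def fetch_aggregated(trip_id):
    """A hypothetical aggregated 'itinerary summary' endpoint: one round trip."""
    global calls
    calls += 1
    return [SEGMENTS[s] for s in ITINERARY["segments"]]

segments = fetch_restful("trip-42")
n_restful = calls                   # 4 round trips for a 3-segment trip
fetch_aggregated("trip-42")
n_aggregated = calls - n_restful    # 1 round trip
```

On a mobile network where each round trip costs tens of milliseconds, the difference between N+1 calls and one call is exactly what users perceive as a slow screen.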

Facebook Researcher’s New Algorithm Ushers New Paradigm Of Image Recognition

Humans have an innate capability to identify objects in the wild, even from a blurred glimpse of the thing. We do this efficiently by remembering only high-level features that get the job done (identification) and ignoring the details unless required. In the context of deep learning algorithms that do object detection, contrastive learning explored the premise of representation learning to obtain a large picture instead of doing the heavy lifting by devouring pixel-level details. But, contrastive learning has its own limitations. According to Andrew Ng, pre-training methods can suffer from three common failings: generating an identical representation for different input examples, generating dissimilar representations for examples that humans find similar (for instance, the same object viewed from two angles), and generating redundant parts of a representation. The problems of representation learning, wrote Andrew Ng, boil down to variance, invariance, and covariance issues.
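The three failure modes map naturally onto three penalty terms, in the spirit of VICReg-style self-supervised losses. The pure-Python sketch below is illustrative only; real implementations vectorize these over large batches and weight the three terms:

```python
# Illustrative, unweighted versions of the three penalties. Each embedding
# batch is a list of equal-length vectors (one vector per example).
def mean(xs):
    return sum(xs) / len(xs)

def invariance(za, zb):
    """Mean squared distance between two views of the same examples:
    penalizes dissimilar representations for similar inputs."""
    return mean([mean([(a - b) ** 2 for a, b in zip(va, vb)])
                 for va, vb in zip(za, zb)])

def variance_penalty(z, target_std=1.0):
    """Hinge on per-dimension standard deviation: penalizes collapse,
    i.e. identical representations for different inputs."""
    dims = list(zip(*z))
    total = 0.0
    for d in dims:
        mu = mean(d)
        std = mean([(x - mu) ** 2 for x in d]) ** 0.5
        total += max(0.0, target_std - std)
    return total / len(dims)

def covariance_penalty(z):
    """Sum of squared off-diagonal covariances: penalizes redundant,
    correlated dimensions of the representation."""
    n, dims = len(z), list(zip(*z))
    mus = [mean(d) for d in dims]
    total = 0.0
    for i in range(len(dims)):
        for j in range(len(dims)):
            if i != j:
                c = sum((dims[i][k] - mus[i]) * (dims[j][k] - mus[j])
                        for k in range(n)) / (n - 1)
                total += c * c
    return total / len(dims)
```

A batch where every example maps to the same point scores zero on invariance but is maximally punished by the variance term, which is precisely the collapse failure described above.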

How AI Is Changing the IT and AV Industries

When AI can take visual, auditory, and human speech information and generate speech in return, it will need to be able to make decisions. As an example, AI-based systems may be able to process behavioral patterns on smartphone applications and then convert that information into a decision to tweak the user experience to enhance the effectiveness of the application. Another great way for AI to make decisions and change the IT industry is to participate in defect analysis and efficiency analysis. Some AI may be able to assess protocols or infrastructure and determine where defects may exist in the system and then determine the best solutions to increase efficiency. Another consideration is for AI to collect lots of data and generate solutions to improve efficiency over time, even without the presence of a defect. AI being able to create and offer solutions is quickly changing the IT industry for the better, making it more efficient and helpful in the long term. Obviously, the introduction of AI in machines allows for automation at multiple process stages. 

DeepMind aims to marry deep learning and classic algorithms

Algorithms are a really good example of something we all use every day, Blundell noted. In fact, he added, there aren’t many algorithms out there. If you look at standard computer science textbooks, there are maybe 50 or 60 algorithms that you learn as an undergraduate. And everything people use to connect over the internet, for example, is using just a subset of those. “There’s this very nice basis for very rich computation that we already know about, but it’s completely different from the things we’re learning. So when Petar and I started talking about this, we saw clearly there’s a nice fusion that we can make here between these two fields that has actually been unexplored so far,” Blundell said. The key thesis of NAR research is that algorithms possess fundamentally different qualities to deep learning methods. And this suggests that if deep learning methods were better able to mimic algorithms, then generalization of the sort seen with algorithms would become possible with deep learning.

SolarWinds Attack Spurring Additional Federal Investigations

Right now, the SEC investigation appears fairly broad and could reveal other cyber incidents involving these companies, including past data breaches and ransomware attacks, says Austin Berglas, who formerly was an assistant special agent in charge of cyber investigations at the FBI's New York office. "This [inquiry] could potentially include forensic and investigative reports of past, unreported incidents and could bring the topic of attorney privilege into play," says Berglas, who is now global head of professional services at cybersecurity firm BlueVoyant. "If there is no evidence of [personally identifiable information] exposure, organizations are not mandated to disclose the incident. However, not all investigations are black-and-white. Sometimes evidence is destroyed, unavailable or corrupted, and confirmation of the exposure of sensitive information may not be obtainable upon forensic analysis." While some companies will err on the side of caution and publish data related to breaches, others might not, and Berglas says the SEC might be probing to see which companies are following federal or state laws when it comes to disclosures.

Implementing enterprise transformation using TOGAF

TOGAF includes the concept of "target first" and "baseline first." This can help us in our decision on where to start. If we know what we want the future state to look like, we could begin with the target first and work our way back to the baseline. If we are not sure what we want the future state to look like, we could begin with the baseline and work our way to the target state. Regardless of which path you choose, in the end you need to have both the baseline and target well defined. What we are looking for is the gap between what we have and what we need. And it is within that gap that the enterprise transformation is defined and takes place. The baseline provides us with information on our current state. The target provides us with information on what we would like to achieve at the end of the transformation. With this information, we can put together a transformation roadmap and the ability to measure our progress/success in achieving the target state. Enterprise architecture is a discipline to lead enterprise responses proactively and holistically to disruptive forces by identifying and analysing the execution of change toward desired business vision and outcomes.
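At its simplest, the gap is a set difference between baseline and target capabilities. The capability names below are invented for illustration, not drawn from TOGAF itself:

```python
# Toy gap analysis: the transformation roadmap is defined by the difference
# between the baseline and target states. Capability names are illustrative.
baseline = {"on-prem ERP", "batch reporting", "manual provisioning"}
target = {"on-prem ERP", "self-service analytics", "automated provisioning"}

to_build = target - baseline    # the gap: what the transformation delivers
to_retire = baseline - target   # what gets decommissioned along the way
to_keep = baseline & target     # carried forward unchanged

roadmap = sorted(to_build)      # sequencing and prioritization come next
```

Real gap analysis adds dependencies, costs, and sequencing on top of this, but progress can still be measured as the shrinking of `to_build` over time.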

How new banking technology platforms will redefine the future of financial services

The evolution of fintech over the last five years has been quite dramatic in that they have devised new operating and business models that are changing the landscape. They are doing so by bringing in differentiated specialisation in a specific area, which traditional banks are unable to match. For example, there are a few who have created a business around becoming a ‘trusted advisor’ to consumers offering valuable guidance to them on their financial needs and enabling them to make the best choice on financial products and services. Banks which were hitherto aligned to an exclusive sourcing arrangement with a partner now have to contend with integrating seamlessly with these ‘advisors’ and participate in their competitive marketplace to acquire more customers. Not doing so is increasingly not an option, as consumer behaviour is steadily evolving to demand such experiences, and banks cannot provide these on their own. And this is truly open banking. While there are no regulatory obligations as yet to participate in an open banking framework within India, it is a matter of time before this becomes essential in the backdrop of RBI’s account aggregator guidelines expected to come into effect soon.

Quote for the day:

"One man with courage makes a majority." -- Andrew Jackson

Daily Tech Digest - September 11, 2021

This Hardware-Level Security Solution for SSDs Can Help Prevent Ransomware Attacks

Dubbed the SSD Insider++ technology, the new security solution can be integrated into SSDs at the hardware level. So, the ransomware prevention feature will be built right into the SSD drives and will automatically detect unusual encryption activities that are not user-triggered. Now, getting into some technical details, the SSD Insider++ technology uses the inherent writing and deletion mechanisms in NAND flash to perform its task of preventing ransomware attacks. It leverages the SSD controller to continuously monitor the activity of the storage drive. The system triggers when any encryption workload is detected that is not initiated by the authorized user. In that case, the firmware prevents the SSD from accepting any write requests, which in turn suspends the encryption process. The system then notifies the user about abnormal encryption activities via its companion app. The app also allows users to recover any data that was encrypted before the system stopped the ongoing process.
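The article does not detail SSD Insider++'s detection logic, which runs inside the drive's firmware. A common illustrative heuristic for spotting "unusual encryption activity" is a jump in the Shannon entropy of written blocks, since well-encrypted data is statistically close to random bytes. The sketch below shows that heuristic only, not the actual product:

```python
import math
import os
from collections import Counter

def shannon_entropy(data: bytes) -> float:
    """Bits of entropy per byte: ~0 for constant data, ~8 for random bytes."""
    counts = Counter(data)
    n = len(data)
    return -sum((c / n) * math.log2(c / n) for c in counts.values())

def looks_encrypted(block: bytes, threshold: float = 7.5) -> bool:
    """Flag write payloads whose byte distribution resembles ciphertext."""
    return shannon_entropy(block) > threshold

plaintext = b"quarterly sales report " * 200   # ordinary user data
random_like = os.urandom(4096)                 # stands in for ransomware output
suspicious = looks_encrypted(random_like) and not looks_encrypted(plaintext)
```

A firmware-level detector would combine a signal like this with write-pattern analysis (e.g. read-then-overwrite sequences) to cut false positives from legitimately compressed or user-encrypted files.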

Graph Databases VS Relational Databases – Learn How a Graph Database Works

Graph databases are a type of “Not only SQL” (NoSQL) data store. They are designed to store and retrieve data in a graph structure. The storage mechanism used can vary from database to database. Some GDBs may use more traditional database constructs, such as table-based storage, and then have a graph API layer on top. Others will be ‘native’ GDBs – where the whole construct of the database, from storage to management and query, maintains the graph structure of the data. Many of the graph databases currently available do this by treating relationships between entities as first class citizens. There are broadly two types of GDB: Resource Descriptive Framework (RDF)/triple stores/semantic graph databases, and property graph databases. An RDF GDB uses the concept of a triple, which is a statement composed of three elements: subject-predicate-object. The subject will be a resource or node in the graph, the object will be another node or a literal value, and the predicate represents the relationship between subject and object.
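The triple model is compact enough to sketch directly. Below, each fact is a (subject, predicate, object) tuple and queries are pattern matches with `None` as a wildcard; a real triple store adds indexing and a query language such as SPARQL. The data is invented for illustration:

```python
# Minimal illustration of the RDF triple model: facts as
# (subject, predicate, object) statements.
triples = {
    ("alice", "worksFor", "acme"),
    ("acme", "locatedIn", "berlin"),
    ("alice", "knows", "bob"),
}

def match(s=None, p=None, o=None):
    """Return all triples matching the (s, p, o) pattern; None matches anything."""
    return {t for t in triples
            if (s is None or t[0] == s)
            and (p is None or t[1] == p)
            and (o is None or t[2] == o)}

alice_facts = match(s="alice")               # everything known about alice
in_berlin = match(p="locatedIn", o="berlin") # who or what is in berlin
```

Because relationships are first-class data rather than foreign keys, traversals (alice → worksFor → acme → locatedIn → berlin) chain pattern matches instead of joining tables.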

Microsoft Warns of Cross-Account Takeover Bug in Azure Container Instances

An attacker exploiting the weakness could execute malicious commands on other users' containers and steal customer secrets and images deployed to the platform. The Windows maker did not share any additional specifics related to the flaw, save advising that affected customers "revoke any privileged credentials that were deployed to the platform before August 31, 2021." Azure Container Instances is a managed service that allows users to run Docker containers directly in a serverless cloud environment, without requiring the use of virtual machines, clusters, or orchestrators. ... "This discovery highlights the need for cloud users to take a 'defense-in-depth' approach to securing their cloud infrastructure that includes continuous monitoring for threats — inside and outside the cloud platform," Unit 42 researchers Ariel Zelivanky and Yuval Avrahami said. "Discovery of Azurescape also underscores the need for cloud service providers to provide adequate access for outside researchers to study their environments, searching for unknown threats."

Credit-Risk Models Based on Machine Learning: A ‘Middle-of-the-Road’ Solution

The low explainability of ML-driven models for credit risk remains, perhaps, their greatest drawback. A visual inspection of, say, a random forest is impossible, and although there are some tools (like feature importance) that provide information about the inner workings of this type of model, ML model logic is significantly more complicated than that of a traditional logistic regression approach. However, we’re increasingly seeing “middle-of-the-road” solutions that incorporate ML-engineered features within an easier-to-explain logistic regression model. Under this approach, ML is used to select highly-predictive features (for, say, probability of default), which are then integrated with the so-called “logit” model. This hybrid model would include both original and ML-engineered features, and an automated algorithm would select the features for forecasting PD. Performance-driven features can be added to this model through Sequential Forward Selection (SFS), one of the most widely-used algorithms for feature selection. 
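SFS itself is a simple greedy loop, independent of the model behind the score. The skeleton below uses an invented toy scoring function; in the credit-risk setting the score would be, for example, cross-validated AUC of the logit model refitted on each candidate feature set:

```python
# Generic Sequential Forward Selection: greedily add the feature that most
# improves the score, stopping when no remaining feature helps.
def sfs(features, score, min_gain=1e-9):
    selected, best = [], score([])
    remaining = list(features)
    while remaining:
        candidate_score, pick = max((score(selected + [f]), f) for f in remaining)
        if candidate_score - best < min_gain:
            break
        selected.append(pick)
        remaining.remove(pick)
        best = candidate_score
    return selected

# Invented toy score: each feature contributes a fixed amount and redundant
# duplicates add nothing. In practice, the score would be cross-validated
# model performance (e.g. AUC of the logistic regression).
CONTRIB = {"utilization": 0.10, "delinquencies": 0.07, "ml_feature_1": 0.12}

def toy_score(feats):
    return sum(CONTRIB.get(f, 0.0) for f in set(feats))

chosen = sfs(CONTRIB.keys(), toy_score)
```

Because each step refits the same explainable logit model, the hybrid approach keeps the final scorecard auditable even when the candidate features were ML-engineered.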

DevOps Productivity: Have We Reached Its Limits?

As we have established, DevOps engineers are not babysitters. They are highly qualified and talented engineers who thrive by building new and innovative technologies. The grunt work of cloud management, therefore, is often seen as an obstacle to DevOps productivity as it requires constant monitoring, configuration and adjustments. It doesn’t help that much of this work is impossible to do 100% effectively. Thankfully, there is a better way. AI automation is perfectly suited to handle repetitive, routine tasks such as analyzing real-time data, predicting future scale, adjusting infrastructure to accommodate changes in requirements and more. Plus, it can do all of this with perfect accuracy. DevOps teams cannot be as productive as they want if they are constantly putting out fires in their cloud infrastructure. By automating the tasks they don’t like doing anyway, your cloud stays fully optimized while your DevOps engineers are able to work more efficiently on what they enjoy most.

The three ingredients a software solution for digital payment needs

Above all, payment security is the main priority for consumers when it comes to payments. Digital payment solutions need to be transparent and compliant with regulations. As the cryptocurrency industry is growing, governments are taking note and implementing stricter regulations. Those regulations in turn demand higher degrees of compliance and possibly license requirements. SMEs will want to avoid the inherent volatility risk of cryptocurrencies. With the right technology, this is also possible: the purchase amount paid is credited to the merchant in fiat currency as usual, even if the customer pays using cryptocurrency — unless, of course, the merchant prefers to keep the purchase amount as cryptocurrency. In some countries, such as Germany, regulators have introduced specific legislation to oversee cryptocurrency custodians. As such, to date, the lack of regulated and supervised custody solutions has been a barrier to entry for SMEs accepting digital asset payments. Confusion over whom to choose as the right partner has been common and a huge concern for regulatory-compliant institutions.

Cybersecurity spending is a battle: Here's how to win

It can be difficult to get the board's full attention, especially if cybersecurity is seen purely as an outgoing with little benefit to the bottom line. The best way to address this is to explain, in plain language, the potential threats out there. It could even be a good idea for a CISO to run an exercise to demonstrate the potential impact of a cyber incident. This shouldn't be over-dramatised, but presenting the board with an exercise based around a real-life ransomware incident, for example, and explaining how a similar attack could affect the company could open a few eyes, showing what measures need to be taken. This could then lead to extra budget being released. "One of the best ways to get their attention is to conduct a very thoughtful ransomware exercise. Pick something very realistic and allow your executive team to walk through the decision-making process," says Theresa Payton, CEO of Fortalice Solutions and former chief information officer (CIO) at The White House. 

Wanted: Meaningful Business Insights

Companies able to pivot attention to the quality of insights, not just the quantity of data collected, are starting to reap the rewards of data-driven business. A prominent oil and gas company that spent more than five years trying to wrangle traditional analytics solutions to get insights on common metrics like on-time and in-full deliveries or days payable outstanding (DPO) was able to move beyond forensic insights to predictive analysis. Specifically, it achieved a greater than 40% reduction in inventory on-hand carrying costs by linking inventory use data with actual planning parameters using the tools of a context-rich data model. Similarly, a major manufacturer improved its on-time delivery rate from the low 80-percent range to the mid-90-percent range by connecting the dots between production capabilities and shipment results, and making the necessary adjustments based on the insights. In the retail space, companies could categorize the effective selling window for seasonal or perishable goods, each with a limited shelf life, to dramatically reduce obsolete inventory.

What Can the UK Learn From the US Infrastructure Bill Crypto Debacle?

We’re also seeing overreach and wildly sporadic regulatory moves from non-governing bodies (e.g. the SEC’s random targeting of Coinbase’s P2P lending product), which are scrambling to make sense of this technology while concurrently falling behind even some of the smallest nation-states on earth. Even more interestingly, the provision was challenged by a coalition from both the left and right of the House. Crypto is not a political movement, as Jackson Palmer, one of the creators of Dogecoin, recently accused it of being; it is a societal movement. It comes as no surprise that Cynthia Lummis, Wyoming’s Senator, was the driving force behind the push to strike the provision, as was Ted Cruz, the Republican Senator for Texas. Wyoming has been incredibly supportive of crypto for years now. It was the first state to have a crypto bank and the first to legally recognise a Decentralised Autonomous Organisation, a business that uses blockchain to govern itself without the intervention of a central authority.

HAProxy urges users to update after HTTP request smuggling vulnerability found

"This vulnerability has the potential to have a wide-spread impact, but fortunately, there are plenty of ways to mitigate the risk posed by this HAProxy vulnerability, and many users most likely have already taken the necessary steps to protect themselves," Bar-Dayan told ZDNet. "CVE-2021-40346 is mitigated if HAProxy has been updated to one of the latest four versions of the software. Like with most vulnerabilities, CVE-2021-40346 can't be exploited without severe user negligence. The HAProxy team has been responsible in their handling of the bug. Most likely, the institutional cloud and application services that use HAProxy in their stack have either applied upgrades or made the requisite configuration changes by now. Now it is up to all HAProxy users to run an effective vulnerability remediation program to protect their businesses from this very real threat." Michael Isbitski, the technical evangelist at Salt Security, added that HAProxy is a multi-purpose, software-based infrastructure component that can fulfill a number of networking functions, including load balancer, delivery controller, SSL/TLS termination, web server, proxy server and API mediator.

Quote for the day:

"Leadership is practices not so much in words as in attitude and in actions." -- Harold Geneen