Daily Tech Digest - September 13, 2021

4 Steps for Fostering Collaboration Between IT Network and Security Teams

Collaboration requires a single source of truth or shared data that's reliable and accessible to all involved. If one team is working with outdated information, or a different type of data entirely, it won't be on the same page as the other team. Likewise, if one team lacks specific details, such as visibility into a public cloud environment, it won't be an effective partner. Unfortunately, many enterprise-level organizations struggle with data control conflicts because individual teams can be overly protective of data they extract. As a result, what is shared is sometimes inconsistent, irrelevant, or out of date. At the same time, many network and security tools are already leveraging the same data, such as network packets, flows, and robust sets of metadata. This network-derived data, or "smart data," must support workflows without requiring management tool architects to cobble together multiple secondary data stores to prop it up. Consequently, network and security teams should find ways to unify their data collection and the tools they use for analysis wherever possible to overcome sharing issues.


A guide to sensor technology in IoT

There is still plenty of room for IoT sensor technology to grow, and further disrupt multiple industries, in the coming years. With a hybrid working model set to continue being common among businesses, the use of IoT sensors can enable employees that choose not to work on company premises to carry out tasks remotely. Meanwhile, as smart cities continue developing, IoT sensors will also remain a big part of the lives of citizens. With national infrastructures involving IoT sensors in the works around the world, businesses will be able to benefit from increased connectivity and decreased costs, while being able to cut carbon emissions as national and global sustainability targets loom. The roll-out of 5G also promises to boost the IoT space, with more and more device varieties set to be compatible with the burgeoning wireless technology. This won’t mean that LPWAN will lose its relevance, however — organisations will still find valuable uses for smaller amounts of data that may be easier to manage and transfer between devices. There is the breakthrough of standards such as LTE-M and NB-IoT to consider here, as well.


Real-time Point-of-Sale Analytics With a Data Lakehouse

Different processes generate data differently within the POS. Sales transactions are likely to leave a trail of new records appended to relevant tables. Returns may follow multiple paths triggering updates to past sales records, the insertion of new, reversing sales records and/or the insertion of new information in returns-specific structures. Vendor documentation, tribal knowledge and even some independent investigative work may be required to uncover exactly how and where event-specific information lands within the POS. Understanding these patterns can help build a data transmission strategy for specific kinds of information. Higher frequency, finer-grained, insert-oriented patterns may be ideally suited for continuous streaming. Less frequent, larger-scale events may best align with batch-oriented, bulk data styles of transmission. But if these modes of data transmission represent two ends of a spectrum, you are likely to find most events captured by the POS fall somewhere in between. The beauty of the data lakehouse approach to data architecture is that multiple modes of data transmission can be employed in parallel.
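The streaming-versus-batch spectrum described above can be sketched as a simple routing decision. This is an illustrative sketch only — the event names, thresholds, and the `choose_transmission` helper are hypothetical, not part of any lakehouse product:

```python
# Hypothetical sketch: routing POS event types to a transmission mode
# based on their frequency and size profile.

def choose_transmission(event_type, events_per_hour, avg_batch_rows):
    """Pick streaming for high-frequency, fine-grained inserts;
    bulk batch for infrequent, large-scale loads; micro-batch otherwise."""
    if events_per_hour >= 1000 and avg_batch_rows <= 10:
        return "continuous streaming"
    if events_per_hour <= 10 and avg_batch_rows >= 10_000:
        return "bulk batch"
    return "micro-batch"  # the middle of the spectrum

profiles = {
    "sale_line_item": (50_000, 1),       # constant trickle of inserts
    "inventory_snapshot": (1, 500_000),  # nightly full extract
    "returns": (200, 5),                 # somewhere in between
}

for name, (rate, rows) in profiles.items():
    print(name, "->", choose_transmission(name, rate, rows))
```

The point of the lakehouse argument is that all three branches can feed the same tables in parallel, so the routing choice stays an operational detail rather than an architectural fork.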


The human factor in cybersecurity

People are creatures of habit who seek out shortcuts and efficiencies. If I write a 5-step process for logging in to my most secure system, at least one person will email me explaining how they found a shortcut. And there will be many who complain about having to wait 4 seconds as their login is verified. I know this, and so I push back on my team when they establish new protocols. Can we make this easier? Can we use a tool – like multi-factor authentication or PIV cards? Can we eliminate irritating parts of cybersecurity? Yes, the solutions might cost more, but the benefit is compliance. I never want to create a system that has my people jotting weekly passwords on post-it notes. So, I ask my security team to think like a busy employee, a hurried exec, and a distracted engineer – and remove complexity from our routines. Cybersecurity measures take time to work, but human brains process faster. We might accept that implementing an excellent cyber program and maintaining cyber hygiene —like basic email scanning or link scanning—adds a layer of inefficiency; however, this is often a difficult concept for employees. 


Now Is The Time To Update Your Risk Management Strategy And Prioritize Cybersecurity

It’s clear that cybersecurity threats are real for companies of all types and sizes, and so if there is one area of risk management businesses can strengthen this year, it should be this. The good news is that many companies are doing just that. A recent OnePoll survey of 375 senior-level IT security professionals, commissioned by my company, confirms this. Respondents in our survey indicated that recent data breaches, like SolarWinds, are impacting the way their organizations prioritize cybersecurity. Nearly all respondents believe cybersecurity is considered a top business risk within their organizations, and 82% say these breaches have either greatly or somewhat impacted the way their organization prioritizes cybersecurity. The U.K.’s Department for Digital, Culture, Media and Sport commissioned another survey that underscores these findings. They found that 77% of businesses say cybersecurity is “a high priority for their directors or senior managers.” This prioritization is also turning into real investment into cybersecurity measures by businesses, which means company leaders are walking the talk.


7 Microservices Best Practices for Developers

At times, it might seem to make sense for different microservices to access data in the same database. However, a deeper examination might reveal that one microservice only works with a subset of database tables, while the other microservice only works with a completely different subset of tables. If the two subsets of data are completely orthogonal, this would be a good case for splitting the database so that each service has its own dedicated store. This way, a single service depends on its dedicated data store, and that data store's failure will not impact any service besides that one. We could make an analogous case for file stores. When adopting a microservices architecture, there's no requirement for separate microservices to use the same file storage service. Unless there's an actual overlap of files, separate microservices ought to have separate file stores. With this separation of data comes an increase in flexibility. For example, let's assume we had two microservices, both sharing the same file storage service with a cloud provider. One microservice regularly touches numerous assets but is small in file size.
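The orthogonality check described above amounts to asking whether the two services' table sets are disjoint. A minimal sketch, with made-up service and table names:

```python
# Sketch: decide whether two microservices' database usage is orthogonal
# enough to justify a database-per-service split. Names are illustrative.

orders_service_tables = {"orders", "order_items", "payments"}
catalog_service_tables = {"products", "categories", "suppliers"}

def can_split(tables_a, tables_b):
    """If the two table subsets are disjoint, each service can own its
    own database and one store's failure won't impact the other."""
    return tables_a.isdisjoint(tables_b)

print(can_split(orders_service_tables, catalog_service_tables))  # True
```

If the sets overlap, the shared tables point at either a missing third service that should own that data, or a dependency that has to be made explicit through an API rather than through the database.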


Why the ‘accidental hybrid’ cloud exists, and how to manage it

With many security tools designed for an on-premises world, they can lack the application-level insight needed to positively impact digital services. Businesses are therefore inevitably becoming more vulnerable to cyber attacks, especially as an ‘accidental hybrid’ environment makes it challenging to accurately monitor traffic or detect potential threats. If a SecOps team has a ‘clouded’ vision into the cloud environment, they may be forced to rely only on trace files or application logs that ultimately provide a less than perfect view into the network. What’s more, with the pervasive issue of the digital skills shortage, there is a significant lack of experts who truly understand how to secure the hybrid cloud environment. As long as a visibility strategy is prioritised, network automation becomes an invaluable solution to overcoming the issues of overstretched security professionals and the increasing ‘threatscape’. While it may have seemed a daunting process in the past, automation of data analysis is now surprisingly simple and can be integral for gaining better insight and, in turn, mitigating attacks.


How Quantifying Information Leakage Helps to Protect Systems

The first and most important step is to identify the high value secrets that your system is protecting. Not all assets need the same degree of protection. The next step is to identify observable information that could be correlated to your secret. Try to be as comprehensive as possible, considering time, electrical output, cache states, and error messages. Once you have identified what an attacker could observe, a good preventative measure is to disassociate this observable information from your sensitive information. For example, if you notice that a program processing some sensitive information takes longer with one input than another, you can take steps to standardize the processing time. You do not want to give an attacker any hints. Next, I suggest threat modeling. Identify the goals, abilities, and rewards of possible attackers. Establishing what your adversary considers "success" could inform your system design. Finally, depending on your resources, you can approximate the distribution of your secrets. 
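The timing example above — standardizing processing time so an observation can't be correlated with the secret — has a standard mitigation in most languages. A minimal Python sketch contrasting the leaky and the fixed versions:

```python
import hmac

# Naive comparison returns as soon as a byte differs, so how long it
# takes leaks how much of the attacker's guess was correct.
def naive_equal(secret: bytes, guess: bytes) -> bool:
    if len(secret) != len(guess):
        return False
    for s, g in zip(secret, guess):
        if s != g:
            return False  # early exit: an observable timing difference
    return True

# hmac.compare_digest examines every byte regardless of where the first
# mismatch occurs, standardizing the processing time.
def constant_time_equal(secret: bytes, guess: bytes) -> bool:
    return hmac.compare_digest(secret, guess)

print(constant_time_equal(b"s3cret", b"s3cret"))  # True
print(constant_time_equal(b"s3cret", b"guess!"))  # False
```

The same disassociation principle generalizes to the other observables the author lists: pad electrical or cache signatures, and return uniform error messages regardless of which check failed.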


How to explain DevSecOps in plain English

DevSecOps extends the same basic principle to security: It shouldn’t be the sole responsibility of a group of analysts huddled in a Security Operations Center (SOC) or a testing team that doesn’t get to touch the code until just before it gets deployed. That was the dominant model in the software delivery pipelines of old: Security was a final step, rather than something considered at every step. And that used to be at least passable, for the most part. As Red Hat's DevSecOps primer notes, “That wasn’t as problematic when development cycles lasted months or even years, but those days are over.” Those days are most definitely over. That final-stage model simply didn’t account for cloud, containers, Kubernetes, and a wealth of other modern technologies. And regardless of a particular organization’s technology stack or development processes, virtually every team is expected to ship faster and more frequently than in the past. At its core, the role of security is quite simple: Most systems are built by people, and people make mistakes. 


End your meeting with clear decisions and shared commitment

In many cases, participants do the difficult, creative work of diagnosing issues, analyzing problems, and brainstorming new ideas but don’t reap the fruits of their labor because they fail to translate insights into action. Or, with the end of the meeting looming—and team members needing to get to their next meeting, pick up kids from school, catch a train, and so on—leaders rush to devise a plan. They press people into commitments they have not had time to think through—and then can’t (or won’t) keep to. Either of these mistakes can result in an endless cycle of meetings without solutions, leaving people feeling frustrated and cynical. Here are four strategies that can help leaders avoid these detrimental outcomes, and instead foster a sense of clarity and purpose. ... The key to this strategy: to prepare for an effective close, leaders should “cue” the group to start narrowing the options, ideas, or solutions on the table, whether it means going from ten job candidates to three or selecting the top few messages pitched for a new brand campaign. The timing for this cue varies based on the desired meeting outcomes, but it is usually best to start narrowing about halfway through the allotted time.



Quote for the day:

"People seldom improve when they have no other model but themselves." -- Oliver Goldsmith

Daily Tech Digest - September 12, 2021

How to develop a two-tiered security model for the hybrid work paradigm

Providing organizations and their stakeholders complete digital security is a part of the holistic security culture that enterprises must inculcate. This is how they can ensure that the work paradigm of the future is anchored by safety and technological progression on the back of a top-down security culture. Organizations must promote the belief that upholding digital security requirements isn’t the responsibility of the security department alone. A sustainable security culture requires a collective investment from all stakeholders in the organization. A vision that treats security as a non-negotiable asset, complemented by employee sensitization and training practices, is necessary for the safekeeping of valuable data and prevention against exploitation of vulnerabilities by threat actors. To drive optimal results, administrators must make sure that the mechanics used to deliver security training to employees account for different departments, learning styles, and abilities. Employees are the bedrock of any organization. Employee errors are common when they are unsupervised, anxious, or uneducated in matters pertaining to organizational security. 


5 Habits I Learned From Successful Data Scientists at Microsoft

Continuous learning and improvement are paramount for Data Scientists looking to stand out from the crowd of other qualified data professionals. As many already know, Data Science is not a static field. Look at job descriptions, find out what skills most employers are looking for in a data scientist, and compare with your resume. Are you lacking these skills? Identify your weak points and work towards improvement. ... It’s not just about models and programming languages; it is paramount that you understand the inner workings of your profession. The truth is that if you depend only on the tricks and experience you’ve gathered from your previous or current job, you are very likely to remain professionally stagnant. ... There are hundreds of quality research papers, books, articles, and magazines exhibiting valuable Data Science resources to educate yourself and expand your knowledge about certain concepts in your field. Before I moved on to get my Data Science certification, I learned most of the programming languages and analysis tricks from blog posts.


Yandex Pummeled by Potent Meris DDoS Botnet

“Yandex’ security team members managed to establish a clear view of the botnet’s internal structure. L2TP [Layer 2 Tunneling Protocol] tunnels are used for internetwork communications. The number of infected devices, according to the botnet internals we’ve seen, reaches 250,000,” wrote Qrator in a Thursday blog post. L2TP is a protocol used to manage virtual private networks and deliver internet services. Tunneling facilitates the transfer of data between two private networks across the public internet. Yandex and Qrator launched an investigation into the attack and believe the Mēris botnet to be highly sophisticated. “Moreover, all those [compromised MikroTik hosts are] highly capable devices, not your typical IoT blinker connected to Wi-Fi – here we speak of a botnet consisting of, with the highest probability, devices connected through the Ethernet connection – network devices, primarily,” researchers wrote. ... While patching MikroTik devices is the ideal mitigation to combat future Mēris attacks, researchers also recommended blacklisting.
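The blacklisting recommendation boils down to checking each request's source address against a set of blocked networks. A minimal sketch using Python's standard `ipaddress` module — the networks below are RFC 5737 documentation ranges, not addresses actually associated with Mēris:

```python
import ipaddress

# Illustrative blocklist check. In production this would sit in a
# firewall or load balancer, not application code, and the list would
# come from a threat-intelligence feed.
blocked_networks = [
    ipaddress.ip_network("198.51.100.0/24"),
    ipaddress.ip_network("203.0.113.0/24"),
]

def is_blocked(source_ip: str) -> bool:
    addr = ipaddress.ip_address(source_ip)
    return any(addr in net for net in blocked_networks)

print(is_blocked("198.51.100.7"))  # True
print(is_blocked("192.0.2.10"))    # False
```

Blocklisting only mitigates known sources, which is why the researchers frame patching the MikroTik devices themselves as the preferred fix.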


Consistency, Coupling, and Complexity at the Edge

Although RESTful APIs are easy for backend services to call, they are not so easy for frontend applications to call. That is because an emotionally satisfying user experience is not very RESTful. Users don’t want a GUI where entities are nicely segmented. They want to see everything all at once unless progressive disclosure is called for. For example, I don’t want to navigate through multiple screens to review my travel itinerary; I want to see the summary (including flights, car rental, and hotel reservation) all on one screen before I commit to making the purchase. When a user navigates to a page on a web app or deep links into a Single Page Application (SPA) or a particular view in a mobile app, the frontend application needs to call the backend service to fetch the data needed to render the view. With RESTful APIs, it is unlikely that a single call will be able to get all the data. Typically, one call is made, then the frontend code iterates through the results of that call and makes more API calls per result item to get all the data needed.
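The "one call, then one more call per result item" pattern (often called N+1) can be made concrete with a tiny simulation. Everything here is hypothetical — the endpoints and payloads are invented to mirror the travel-itinerary example:

```python
# Sketch of the N+1 call pattern described above, with fake endpoints.
FAKE_API = {
    "/itineraries/42": {"flight": 7, "car": 3, "hotel": 9},
    "/flights/7": {"airline": "Acme Air"},
    "/cars/3": {"vendor": "Acme Rentals"},
    "/hotels/9": {"name": "Acme Suites"},
}

def get(path):
    """Stand-in for an HTTP GET; counts round trips."""
    get.calls += 1
    return FAKE_API[path]
get.calls = 0

# One call for the itinerary, then one per referenced entity.
itinerary = get("/itineraries/42")
summary = {
    "flight": get(f"/flights/{itinerary['flight']}"),
    "car": get(f"/cars/{itinerary['car']}"),
    "hotel": get(f"/hotels/{itinerary['hotel']}"),
}
print(get.calls)  # 4 round trips to render a single screen
```

This round-trip multiplication is the usual motivation for an aggregation layer at the edge — a backend-for-frontend endpoint or a GraphQL query — that returns the whole summary in one response.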

Facebook Researcher’s New Algorithm Ushers New Paradigm Of Image Recognition

Humans have an innate capability to identify objects in the wild, even from a blurred glimpse of the thing. We do this efficiently by remembering only high-level features that get the job done (identification) and ignoring the details unless required. In the context of deep learning algorithms that do object detection, contrastive learning explored the premise of representation learning to obtain a large picture instead of doing the heavy lifting by devouring pixel-level details. But, contrastive learning has its own limitations. According to Andrew Ng, pre-training methods can suffer from three common failings: generating an identical representation for different input examples, generating dissimilar representations for examples that humans find similar (for instance, the same object viewed from two angles), and generating redundant parts of a representation. The problems of representation learning, wrote Andrew Ng, boil down to variance, invariance, and covariance issues.


How AI Is Changing the IT and AV Industries

When AI can take visual, auditory, and human speech information and generate speech in return, it will need to be able to make decisions. As an example, AI-based systems may be able to process behavioral patterns on smartphone applications and then convert that information into a decision to tweak the user experience to enhance the effectiveness of the application. Another great way for AI to make decisions and change the IT industry is to participate in defect analysis and efficiency analysis. Some AI may be able to assess protocols or infrastructure and determine where defects may exist in the system and then determine the best solutions to increase efficiency. Another consideration is for AI to collect lots of data and generate solutions to improve efficiency over time, even without the presence of a defect. AI being able to create and offer solutions is quickly changing the IT industry for the better, making it more efficient and helpful in the long term. Obviously, the introduction of AI in machines allows for automation at multiple process stages. 


DeepMind aims to marry deep learning and classic algorithms

Algorithms are a really good example of something we all use every day, Blundell noted. In fact, he added, there aren’t many algorithms out there. If you look at standard computer science textbooks, there are maybe 50 or 60 algorithms that you learn as an undergraduate. And everything people use to connect over the internet, for example, is using just a subset of those. “There’s this very nice basis for very rich computation that we already know about, but it’s completely different from the things we’re learning. So when Petar and I started talking about this, we saw clearly there’s a nice fusion that we can make here between these two fields that has actually been unexplored so far,” Blundell said. The key thesis of NAR research is that algorithms possess fundamentally different qualities to deep learning methods. And this suggests that if deep learning methods were better able to mimic algorithms, then generalization of the sort seen with algorithms would become possible with deep learning.


SolarWinds Attack Spurring Additional Federal Investigations

Right now, the SEC investigation appears fairly broad and could reveal other cyber incidents involving these companies, including past data breaches and ransomware attacks, says Austin Berglas, who formerly was an assistant special agent in charge of cyber investigations at the FBI's New York office. "This [inquiry] could potentially include forensic and investigative reports of past, unreported incidents and could bring the topic of attorney privilege into play," says Berglas, who is now global head of professional services at cybersecurity firm BlueVoyant. "If there is no evidence of [personally identifiable information] exposure, organizations are not mandated to disclose the incident. However, not all investigations are black-and-white. Sometimes evidence is destroyed, unavailable or corrupted, and confirmation of the exposure of sensitive information may not be obtainable upon forensic analysis." While some companies will err on the side of caution and publish data related to breaches, others might not, and Berglas says the SEC might be probing to see which companies are following federal or state laws when it comes to disclosures.


Implementing enterprise transformation using TOGAF

TOGAF includes the concepts of "target first" and "baseline first." This can help us decide where to start. If we know what we want the future state to look like, we could begin with the target first and work our way back to the baseline. If we are not sure what we want the future state to look like, we could begin with the baseline and work our way to the target state. Regardless of which path you choose, in the end you need to have both the baseline and target well defined. What we are looking for is the gap between what we have and what we need. And it is within that gap that the enterprise transformation is defined and takes place. The baseline provides us with information on our current state. The target provides us with information on what we would like to achieve at the end of the transformation. With this information, we can put together a transformation roadmap and the ability to measure our progress/success in achieving the target state. Enterprise architecture is a discipline to lead enterprise responses proactively and holistically to disruptive forces by identifying and analysing the execution of change toward desired business vision and outcomes.
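The gap analysis described above is, at its simplest, a set difference between baseline and target capabilities. A toy sketch with invented capability names (real TOGAF gap analysis also classifies each item and sequences the work, which this omits):

```python
# Illustrative baseline/target gap analysis.
baseline = {"on-prem ERP", "batch reporting", "manual provisioning"}
target = {"on-prem ERP", "self-service analytics", "automated provisioning"}

gap = target - baseline     # capabilities the transformation must build
retire = baseline - target  # capabilities to phase out
keep = baseline & target    # capabilities carried forward unchanged

print(sorted(gap))
print(sorted(retire))
print(sorted(keep))
```

Tracking how the `gap` set shrinks over time gives exactly the progress measure the excerpt mentions: the transformation is done when the gap is empty.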


How new banking technology platforms will redefine the future of financial services

The evolution of fintech over the last five years has been quite dramatic in that fintechs have devised new operating and business models that are changing the landscape. They are doing so by bringing in differentiated specialisation in a specific area, which traditional banks are unable to match. For example, there are a few who have created a business around becoming a ‘trusted advisor’ to consumers, offering valuable guidance to them on their financial needs and enabling them to make the best choice on financial products and services. Banks which were hitherto aligned to an exclusive sourcing arrangement with a partner now have to contend with integrating seamlessly with these ‘advisors’ and participating in their competitive marketplace to acquire more customers. Not doing so is increasingly not an option, as consumer behaviour is steadily evolving to demand such experiences, and banks cannot provide these on their own. And this is truly open banking. While there are no regulatory obligations as yet to participate in an open banking framework within India, it is a matter of time before this becomes essential in the backdrop of RBI’s account aggregator guidelines expected to come into effect soon.



Quote for the day:

"One man with courage makes a majority." -- Andrew Jackson

Daily Tech Digest - September 11, 2021

This Hardware-Level Security Solution for SSDs Can Help Prevent Ransomware Attacks

Dubbed the SSD Insider++ technology, the new security solution can be integrated into SSDs at the hardware level. So, the ransomware prevention feature will be built right into the SSD drives and will automatically detect unusual encryption activities that are not user-triggered. Now, getting into some technical details, the SSD Insider++ technology uses the inherent writing and deletion mechanisms in NAND flash to perform its task of preventing ransomware attacks. It leverages the SSD controller to continuously monitor the activity of the storage drive. The system triggers when any encryption workload is detected that is not initiated by the authorized user. In that case, the firmware prevents the SSD from accepting any write requests, which in turn suspends the encryption process. The system then notifies the user about abnormal encryption activities via its companion app. The app also allows users to recover any data that was encrypted before the system stopped the ongoing process.
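One common signal behind this kind of detection is that encrypted data is statistically close to random, so a sudden run of high-entropy writes stands out. The sketch below is a simplified host-level illustration of that idea only — it is not how SSD Insider++ is actually implemented, and real firmware combines entropy with other write-pattern signals:

```python
import math
from collections import Counter

def shannon_entropy(data: bytes) -> float:
    """Bits per byte: close to 8.0 for random/encrypted data,
    much lower for typical text or structured files."""
    counts = Counter(data)
    n = len(data)
    return -sum((c / n) * math.log2(c / n) for c in counts.values())

plain_write = b"quarterly sales report, region north " * 30
# Deterministic stand-in for ciphertext: a uniform spread of byte values.
random_write = bytes((i * 97 + 13) % 256 for i in range(1024))

THRESHOLD = 7.0  # tunable; entropy alone would cause false positives
for buf in (plain_write, random_write):
    flag = "suspicious" if shannon_entropy(buf) > THRESHOLD else "normal"
    print(flag)
```

Compressed media (JPEG, ZIP) is also high-entropy, which is why a threshold like this is only one input to a real detector rather than a verdict on its own.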


Graph Databases VS Relational Databases – Learn How a Graph Database Works

Graph databases are a type of “Not only SQL” (NoSQL) data store. They are designed to store and retrieve data in a graph structure. The storage mechanism used can vary from database to database. Some GDBs may use more traditional database constructs, such as table-based storage, and then have a graph API layer on top. Others will be ‘native’ GDBs – where the whole construct of the database, from storage to management to query, maintains the graph structure of the data. Many of the graph databases currently available do this by treating relationships between entities as first-class citizens. There are broadly two types of GDB: Resource Description Framework (RDF)/triple stores/semantic graph databases, and property graph databases. An RDF GDB uses the concept of a triple, which is a statement composed of three elements: subject-predicate-object. The subject will be a resource or node in the graph, the object will be another node or a literal value, and the predicate represents the relationship between subject and object.
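The subject-predicate-object model can be illustrated with a few lines of code. This is a toy in-memory triple store with a naive pattern-match query, not how a real RDF database indexes its data:

```python
# Minimal triple store: each fact is a (subject, predicate, object) tuple.
triples = [
    ("alice", "knows", "bob"),
    ("bob", "knows", "carol"),
    ("alice", "works_at", "Acme"),
]

def query(s=None, p=None, o=None):
    """None acts as a wildcard, like a variable in a SPARQL pattern."""
    return [t for t in triples
            if (s is None or t[0] == s)
            and (p is None or t[1] == p)
            and (o is None or t[2] == o)]

print(query(s="alice"))             # everything asserted about alice
print(query(p="knows", o="carol"))  # who knows carol
```

Production RDF stores answer the same kind of pattern queries, but over indexes on each position of the triple so that matching does not require a full scan.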


Microsoft Warns of Cross-Account Takeover Bug in Azure Container Instances

An attacker exploiting the weakness could execute malicious commands on other users' containers, steal customer secrets and images deployed to the platform. The Windows maker did not share any additional specifics related to the flaw, save that affected customers "revoke any privileged credentials that were deployed to the platform before August 31, 2021." Azure Container Instances is a managed service that allows users to run Docker containers directly in a serverless cloud environment, without requiring the use of virtual machines, clusters, or orchestrators. ... "This discovery highlights the need for cloud users to take a 'defense-in-depth' approach to securing their cloud infrastructure that includes continuous monitoring for threats — inside and outside the cloud platform," Unit 42 researchers Ariel Zelivanky and Yuval Avrahami said. "Discovery of Azurescape also underscores the need for cloud service providers to provide adequate access for outside researchers to study their environments, searching for unknown threats."


Credit-Risk Models Based on Machine Learning: A ‘Middle-of-the-Road’ Solution

The low explainability of ML-driven models for credit risk remains, perhaps, their greatest drawback. A visual inspection of, say, a random forest is impossible, and although there are some tools (like feature importance) that provide information about the inner workings of this type of model, ML model logic is significantly more complicated than that of a traditional logistic regression approach. However, we’re increasingly seeing “middle-of-the-road” solutions that incorporate ML-engineered features within an easier-to-explain logistic regression model. Under this approach, ML is used to select highly-predictive features (for, say, probability of default), which are then integrated with the so-called “logit” model. This hybrid model would include both original and ML-engineered features, and an automated algorithm would select the features for forecasting PD. Performance-driven features can be added to this model through Sequential Forward Selection (SFS), one of the most widely-used algorithms for feature selection. 
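Sequential Forward Selection, mentioned at the end of the excerpt, is a greedy loop: repeatedly add whichever candidate feature most improves the score, and stop when nothing helps. The sketch below uses an invented toy scoring function in place of the cross-validated model metric (e.g. AUC for predicting default) that a real credit-risk pipeline would use:

```python
def sfs(features, score, min_gain=1e-9):
    """Sequential Forward Selection over a subset-scoring function."""
    selected = []
    best = score(selected)
    while True:
        gains = {f: score(selected + [f]) - best
                 for f in features if f not in selected}
        if not gains:
            break  # every feature already selected
        f, g = max(gains.items(), key=lambda kv: kv[1])
        if g <= min_gain:
            break  # no remaining feature improves the score
        selected.append(f)
        best += g
    return selected

# Toy score with one redundant pair: ml_feature_1 adds nothing once
# utilization is in the model. Values are illustrative, not real stats.
def toy_score(subset):
    base = {"utilization": 0.30, "delinquencies": 0.25,
            "ml_feature_1": 0.20, "inquiries": 0.05}
    s = sum(base[f] for f in subset)
    if {"utilization", "ml_feature_1"} <= set(subset):
        s -= 0.20  # overlap penalty: the two features are redundant
    return s

print(sfs(["utilization", "delinquencies", "ml_feature_1", "inquiries"],
          toy_score))
```

Note how the redundant ML-engineered feature is skipped automatically: once `utilization` is selected, adding `ml_feature_1` yields zero gain, which is exactly the pruning behavior that keeps the final logit model compact and explainable.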


DevOps Productivity: Have We Reached Its Limits?

As we have established, DevOps engineers are not babysitters. They are highly qualified and talented engineers who thrive by building new and innovative technologies. The grunt work of cloud management, therefore, is often seen as an obstacle to DevOps productivity as it requires constant monitoring, configuration and adjustments. It doesn’t help that much of this work is impossible to do 100% effectively. Thankfully, there is a better way. AI automation is perfectly suited to handle repetitive, routine tasks such as analyzing real-time data, predicting future scale, adjusting infrastructure to accommodate changes in requirements and more. Plus, it can do all of this with perfect accuracy. DevOps teams cannot be as productive as they want if they are constantly putting out fires in their cloud infrastructure. By automating the tasks they don’t like doing anyway, your cloud stays fully optimized while your DevOps engineers are able to work more efficiently on what they enjoy most.


The three ingredients a software solution for digital payment needs

Above all, payment security is the main priority for consumers when it comes to payments. Digital payment solutions need to be transparent and compliant with regulations. As the cryptocurrency industry is growing, governments are taking note and implementing stricter regulations. Those regulations in turn demand higher degrees of compliance and possibly license requirements. SMEs will want to avoid the inherent volatility risk of cryptocurrencies. With the right technology, this is also possible: the purchase amount paid is credited to the merchant in fiat currency as usual, even if the customer pays using cryptocurrency — unless, of course, the merchant prefers to keep the purchase amount as cryptocurrency. In some countries, such as Germany, regulators have introduced specific legislation to oversee cryptocurrency custodians. As such, to date, the lack of regulated and supervised custody solutions has been a barrier to entry for SMEs accepting digital asset payments. Confusion on who to choose as the right partner has been common and a huge concern for regulatory-compliant institutions.

Cybersecurity spending is a battle: Here's how to win

It can be difficult to get the board's full attention, especially if cybersecurity is seen purely as an outgoing with little benefit to the bottom line. The best way to address this is to explain, in plain language, the potential threats out there. It could even be a good idea for a CISO to run an exercise to demonstrate the potential impact of a cyber incident. This shouldn't be over-dramatised, but presenting the board with an exercise based around a real-life ransomware incident, for example, and explaining how a similar attack could affect the company could open a few eyes, showing what measures need to be taken. This could then lead to extra budget being released. "One of the best ways to get their attention is to conduct a very thoughtful ransomware exercise. Pick something very realistic and allow your executive team to walk through the decision-making process," says Theresa Payton, CEO of Fortalice Solutions and former chief information officer (CIO) at The White House. 

Wanted: Meaningful Business Insights

Companies able to pivot attention to the quality of insights, not just the quantity of data collected, are starting to reap the rewards of data-driven business. A prominent oil and gas company that spent more than five years trying to wrangle traditional analytics solutions to get insights on common metrics like on-time and full deliveries or days payable outstanding (DPO) was able to move beyond forensic insights to predictive analysis. Specifically, it was able to achieve a greater than 40% reduction in inventory on-hand carrying costs by linking inventory use data with actual planning parameters using the tools of a context-rich data model. Similarly, a major manufacturer was able to improve its on-time delivery metrics from the low 80th percentile to the mid-90th percentile by connecting the dots between production capabilities and shipment results, and making the necessary adjustments based on the insights. In the retail space, companies could categorize the effective window for seasonal or perishable goods—each with limited shelf life—to dramatically reduce obsolete inventory.


What Can the UK Learn From the US Infrastructure Bill Crypto Debacle?

We’re also seeing overreach and wildly sporadic regulatory moves from non-governing bodies (e.g. the SEC’s random targeting of Coinbase’s P2P lending product), who are scrambling to make sense of this technology while concurrently falling behind even some of the smallest nation-states on earth. Even more interestingly, the provision was challenged by a coalition from both the left and right of the House. Crypto is not a political movement, as Jackson Palmer, one of the creators of Dogecoin, recently accused it of being. It is a societal movement. It comes as no surprise that Cynthia Lummis, Wyoming’s Senator, was a driving force behind killing the provision, as was Ted Cruz, the Republican Senator for Texas. Wyoming has been incredibly supportive of crypto for years now. It was the first state to have a crypto bank and the first to legally recognise a Decentralised Autonomous Organisation, a business that uses blockchain to govern itself without the intervention of a central authority.

HAProxy urges users to update after HTTP request smuggling vulnerability found

"This vulnerability has the potential to have a widespread impact, but fortunately, there are plenty of ways to mitigate the risk posed by this HAProxy vulnerability, and many users most likely have already taken the necessary steps to protect themselves," Bar-Dayan told ZDNet. "CVE-2021-40346 is mitigated if HAProxy has been updated to one of the latest four versions of the software. Like with most vulnerabilities, CVE-2021-40346 can't be exploited without severe user negligence. The HAProxy team has been responsible in their handling of the bug. Most likely, the institutional cloud and application services that use HAProxy in their stack have either applied upgrades or made the requisite configuration changes by now. Now it is up to all HAProxy users to run an effective vulnerability remediation program to protect their businesses from this very real threat." Michael Isbitski, the technical evangelist at Salt Security, added that HAProxy is a multi-purpose, software-based infrastructure component that can fulfill a number of networking functions, including load balancer, delivery controller, SSL/TLS termination, web server, proxy server and API mediator.
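
For reference, the interim mitigation that circulated alongside the advisory works at the configuration level: reject any message carrying more than one Content-Length header, the ambiguity that request smuggling relies on. Treat the rules below as a sketch to verify against your HAProxy version's documentation rather than a drop-in fix:

```
# Interim mitigation rules circulated for CVE-2021-40346
# (frontend/listen section; verify syntax for your version):
http-request  deny if { req.hdr_cnt(content-length) gt 1 }
http-response deny if { res.hdr_cnt(content-length) gt 1 }
```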



Quote for the day:

"Leadership is practiced not so much in words as in attitude and in actions." -- Harold Geneen

Daily Tech Digest - September 10, 2021

AI as a service to solve your business problems? Guess again

Companies seeking to use AI as a differentiating technology in order to gain business advantages — and not merely doing it because that’s what everyone else is doing — require planning and strategy, and that almost always means a customized solution. In the words of Sepp Hochreiter (inventor of LSTM, one of the world’s most famous and successful AI algorithms), “the ideal combination for the best time to market and lowest risk for your AI projects is to slowly build a team and use external proven experts as well. No one can hire the best talent quickly, and even worse, you cannot even judge the quality during hiring but will only find out years later.” That’s a far cry from what most online off-the-shelf AI services offer today. The artificial intelligence technology offered by AIaaS comes in two flavors — and the predominant one is a very basic AI system that claims to provide a “one-size-fits-all” solution for all businesses. Modules offered by AI service providers are meant to be applied, as-is, to anything from organizing a stockroom to optimizing a customer database to preventing anomalies in production of a multitude of products.


Let’s Redefine “Productivity” for the Hybrid Era

Despite the burnout so many of us feel, the hybrid environment offers an opportunity to create a more sustainable approach to work. Remote and in-person work both have distinct advantages and disadvantages, and rather than expecting the same outcomes from each, we can build on what makes them unique. When in the office, prioritize relationships and collaborative work like brainstorming around a whiteboard. When working from home, encourage people to design their days to include other priorities such as family, fitness, or hobbies. They should take a nap if they need one and step outside between meetings. Brain studies show that even five-minute breaks between remote meetings help people think more clearly and reduce stress. Likewise, watch out for the risks each type of work carries with it. People can avoid the long commutes they used to have by staggering their schedules to avoid traffic. Encourage them to set boundaries at home so they don’t work every hour of the day just because they can. The trick is finding what works for each individual. 


DevOps Is Not Automation

A highly-evolved DevOps team isn’t just about automating processes; it’s about eliminating production roadblocks. Automating processes without making changes to how your teams communicate is just moving the roadblocks around. A key first step to truly effective DevOps is to synchronize development and operations teams, which in traditional tech culture are siloed and, in fact, often at odds. Forte Group points out that, typically, development teams are incentivized to push things forward (get their deliverables in on time) and quality assurance teams and system administrators are incentivized to minimize disruptions (which often means pushing back deadlines to focus on a quality product). In order to create a culture where continuous development is possible, these teams have to think of their work as sharing an objective. Additionally, they need to communicate frequently and effectively. DevOps also requires a shift from one big deliverable at the end of a long development period to small, incremental deployments that happen regularly and are constantly being monitored and adjusted.


Who Should Own The Job Of Observability In DevOps?

Observability helps answer any question. Of course, this applies to troubleshooting as well as helping users address the unknowns inside today’s complex business systems. With observability, companies can continuously monitor and react to issues or faults. Although observability may seem like the new buzzword in IT, it actually isn’t new at all. The term came about as part of the evolution of monitoring. As organizations began to move toward the cloud and microservice applications, they needed a strategy that enabled them to monitor at scale, along with answering the questions that were not defined during the implementation of the monitoring system. Observability improves the way we collect data and provides the data necessary to drive digital businesses forward. ... Great monitoring tools count for little if people don’t know how to use them properly. Organizations can have too many tools, owned by different teams, so there’s a challenge around the selection and ownership of specific tools within an organization. Organizations must clearly communicate to developers their roles and responsibilities, and the options available to them for solving the observability challenge.


Tooling Network Detection & Response for Ransomware

If ransomware is given too much time on the network, even if it doesn’t gain access to your most critical data, it could have an impact on day-to-day operations. By tracking the ransomware’s lateral movement, organizations can see where it moved, and, more importantly, which machines were infected. Doing so reduces the number of machines infected and thus reduces the time to recovery. Tracking lateral movement is only as good as the data being collected. When new machines or new employees connect to the network, organizations should start monitoring those connections right away. Doing so will provide the most visibility and will enable the organization to track malicious movement from all devices on the network. Additionally, understanding how malicious software is connecting throughout your network requires having an NDR system capable of collecting network flow data and analyzing it. By leveraging flow data, organizations can quickly determine where ransomware—and other malware—are moving across the network. 
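
The flow-data approach described here can be sketched in a few lines. This is an illustrative toy, not a real NDR product: it flags any host that fans out to several internal peers over ports commonly used for lateral movement. The port list and fan-out threshold are assumptions made for the example:

```python
from collections import defaultdict

# Ports often abused for lateral movement (SMB, RDP, WinRM) and an
# assumed fan-out threshold for this toy example.
LATERAL_PORTS = {445, 3389, 5985}
FANOUT_THRESHOLD = 3

def flag_lateral_movement(flows):
    """Return hosts contacting >= FANOUT_THRESHOLD distinct peers
    on lateral-movement ports, given (src, dst, dst_port) flow records."""
    peers = defaultdict(set)
    for src, dst, port in flows:
        if port in LATERAL_PORTS:
            peers[src].add(dst)
    return {src for src, dsts in peers.items() if len(dsts) >= FANOUT_THRESHOLD}

flows = [
    ("10.0.0.5", "10.0.0.9", 445),
    ("10.0.0.5", "10.0.0.10", 445),
    ("10.0.0.5", "10.0.0.11", 3389),
    ("10.0.0.7", "10.0.0.9", 443),   # ordinary HTTPS traffic, ignored
]
print(flag_lateral_movement(flows))  # → {'10.0.0.5'}
```

A production system would of course enrich this with timing, asset context, and baselines for what each host normally talks to.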


Why do humans learn so much faster than machine learning models?

Strides have been made in enabling ML models to mimic the kind of understanding humans have. A great and frankly magical example is word embeddings. ... Word embeddings are a way to represent text data as numbers, which is needed if you want to feed text into an ML model. Word embeddings represent each word using, say, 50 features. Words that are close together in this 50-dimensional space are similar in meaning, for example apple and orange. The challenge we face is how to construct these 50 features. Multiple approaches have been proposed, but in this article we focus on GloVe word embeddings. GloVe word embeddings are derived from a co-occurrence matrix of the words in a corpus. If words occur in the same textual context, GloVe assumes they are similar in meaning. This already presents the first hint that word embeddings learn an understanding of the corpus they train on. If fruits frequently appear in a given context, the word embeddings will know that apple would fit in that place.
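
The co-occurrence idea behind GloVe can be demonstrated on a toy corpus. The sketch below skips GloVe's actual weighted least-squares training and simply uses raw co-occurrence counts as word vectors, which is enough to show that words appearing in similar contexts end up close together:

```python
from collections import defaultdict
import math

# Tiny illustrative corpus; real GloVe trains on billions of tokens.
corpus = ("i ate an apple . i ate an orange . "
          "the car drove fast . the car drove slow").split()

# Build a symmetric co-occurrence matrix with a 2-word context window.
WINDOW = 2
cooc = defaultdict(lambda: defaultdict(float))
for i, word in enumerate(corpus):
    for j in range(max(0, i - WINDOW), min(len(corpus), i + WINDOW + 1)):
        if i != j:
            cooc[word][corpus[j]] += 1.0

def cosine(w1, w2):
    """Cosine similarity between two words' co-occurrence rows."""
    v1, v2 = cooc[w1], cooc[w2]
    dot = sum(v1[k] * v2.get(k, 0.0) for k in v1)
    norm = (math.sqrt(sum(x * x for x in v1.values()))
            * math.sqrt(sum(x * x for x in v2.values())))
    return dot / norm

print(cosine("apple", "orange"))  # high: both occur after "i ate an"
print(cosine("apple", "car"))     # low: almost no shared contexts
```

"apple" and "orange" share the "i ate an" context and score well above "apple" versus "car", which is the intuition GloVe scales up to a full vocabulary.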


Observability is key to the future of software (and your DevOps career)

Observability platforms enable you to easily figure out what’s happening with every request and to identify the cause of issues fast. Learning the principles of observability and OpenTelemetry will set you apart from the crowd and provide you with a skill set that will be in increasing demand as more companies perform cloud migrations. From an end-user perspective, “telemetry” can be a scary-sounding word, but in observability it simply describes the three primary pillars of data: metrics, traces, and logs. This data from your applications and infrastructure is the foundation of any monitoring or observability system. OpenTelemetry is an industry standard for instrumenting applications to provide this telemetry, collecting it across the infrastructure and emitting it to an observability system. ... As an engineer, the best way to get started with something is to get your hands dirty. As someone who works for a commercial observability vendor, I’d be remiss to not tell you to try a free trial of Splunk Observability Cloud—there’s no credit card required and the integration wizards that walk you through setup actually have you integrate your architecture with OpenTelemetry.
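
The three pillars can be illustrated without any SDK. The sketch below uses plain dictionaries rather than the real OpenTelemetry API, purely to show how a span, a metric, and a log describing the same request correlate through a shared trace id:

```python
import time
import uuid

# Illustrative only: real systems emit these through the OpenTelemetry
# SDK. Plain dicts stand in for the three telemetry pillars here.
def handle_request(path):
    trace_id = uuid.uuid4().hex
    start = time.time()
    # ... application work would happen here ...
    duration = time.time() - start
    span   = {"trace_id": trace_id, "name": f"GET {path}", "duration_s": duration}
    metric = {"name": "http.requests", "value": 1, "attributes": {"path": path}}
    log    = {"trace_id": trace_id, "level": "INFO", "message": f"served {path}"}
    return span, metric, log

span, metric, log = handle_request("/checkout")
print(span["trace_id"] == log["trace_id"])  # → True: pillars correlate via trace id
```

That correlation is what lets an observability platform jump from a spike in a metric to the traces and logs of the exact requests involved.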


Service Mesh Ultimate Guide - Second Edition: Next Generation Microservices Development

Broadly speaking, the data plane “does the work” and is responsible for “conditionally translating, forwarding, and observing every network packet that flows to and from a [network endpoint].” In modern systems, the data plane is typically implemented as a proxy, (such as Envoy, HAProxy, or MOSN), which is run out-of-process alongside each service as a “sidecar.” Linkerd uses a micro-proxy approach that’s optimized for the service mesh sidecar use cases. A control plane “supervises the work,” and takes all the individual instances of the data plane—a set of isolated stateless sidecar proxies—and turns them into a distributed system. The control plane doesn’t touch any packets/requests in the system, but instead, it allows a human operator to provide policy and configuration for all of the running data planes in the mesh. The control plane also enables the data plane telemetry to be collected and centralized, ready for consumption by an operator. 
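
The division of labor described above can be sketched in miniature. This is not any real mesh's API, just an illustration: the control plane pushes policy to a set of stateless sidecar proxies, and only the proxies ever touch request traffic:

```python
# Toy model of a service mesh: SidecarProxy is the data plane,
# ControlPlane distributes policy but never handles requests.
class SidecarProxy:
    def __init__(self):
        self.policy = {}

    def apply_config(self, policy):
        self.policy = dict(policy)   # pushed by the control plane

    def forward(self, request):
        # Data plane: conditionally forward or deny every request.
        if request["path"] in self.policy.get("deny_paths", set()):
            return {"status": 403}
        return {"status": 200, "routed_to": self.policy.get("upstream")}

class ControlPlane:
    def __init__(self, proxies):
        self.proxies = proxies

    def push(self, policy):
        # "Supervises the work": configuration only, no packets.
        for proxy in self.proxies:
            proxy.apply_config(policy)

proxies = [SidecarProxy() for _ in range(3)]
ControlPlane(proxies).push({"upstream": "orders-v2", "deny_paths": {"/admin"}})
print(proxies[0].forward({"path": "/admin"}))   # → {'status': 403}
print(proxies[1].forward({"path": "/orders"}))  # routed to orders-v2
```

Note that a single `push` updates every sidecar identically, which is exactly what turns a pile of independent proxies into one distributed system.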


‘Azurescape’ Kubernetes Attack Allows Cross-Container Cloud Compromise

In the multitenant architecture, each customer’s container is hosted in a Kubernetes pod on a dedicated, single-tenant node virtual machine (VM), according to the analysis, and the boundaries between customers are enforced by this node-per-tenant structure. “Since practically anyone can deploy a container to the platform, ACI must ensure that malicious containers cannot disrupt, leak information, execute code or otherwise affect other customers’ containers,” explained researchers. “These are often called cross-account or cross-tenant attacks.” The Azurescape version of such an attack has two prongs: First, malicious Azure customers/adversaries must escape their container; then, they must acquire a privileged Kubernetes service account token that can be used to take over the Kubernetes API server. The API Server provides the frontend for a cluster’s shared state, through which all of the nodes interact, and it’s responsible for processing commands within each node by interacting with Kubelets. Each node has its own Kubelet, which is the primary “node agent” that handles all tasks for that specific node.


The impact of ransomware on cyber insurance driving the need for broader cybersecurity knowledge

Effective security operations are critical to minimizing both the likelihood and the impact of a cyberattack. Disparate tools will not fix the effectiveness problem facing organizations across the globe, nor will they stand up to risk assessments and external insurer requirements. An effective security operations strategy provides risk management leaders the foundation to confidently negotiate with insurance providers and set a long-term cybersecurity agenda that protects the entire business. For insurance providers, there is an opportunity to partner with security operations experts to expand their cybersecurity expertise, to allow for more precise, accurate calculations for policyholders. Cyber insurers and security operations professionals must break down silos and recognize that together, they have a unique opportunity to coordinate effectively to better protect businesses. ... It’s paramount that insurance providers expand their knowledge on cybersecurity. The providers that do will be able to take full control over their policies. 



Quote for the day:

“It is more productive to convert an opportunity into results than to solve a problem – which only restores the equilibrium of yesterday.” -- Peter Drucker

Daily Tech Digest - September 09, 2021

How a National Digital Twin could help catapult sustainability in the UK

Digital twins remain an area that is underfunded and underdeveloped in the UK. This is largely due to an awareness issue. Until recently, digital twins have largely sat in the remit of academia and therefore much of the theory hasn’t turned into action. Any innovation that has been brought to the table has mainly remained siloed between organisations and sectors. Countering this requires strong, central guidance on what can be achieved through digital twins. The Government is primed to take on this leading role, particularly the Department for Business, Energy & Industrial Strategy (BEIS). In an ideal scenario, we’d see it set up small scrum teams of digital twin experts to support, educate and consult organisations across the private and public sectors to first, develop business cases and proof of value, and second, get them to a place where they can develop their own information management strategy to support the digital twin. This cohesive education will help to underpin a National Digital Twin strategy. Hand-in-hand with the awareness issue is a lack of digital maturity and understanding of how to get to that point.


Technical Debt Isn't Technical: What Companies Can Do to Reduce Technical Debt

The biggest problem is that unlike a dirty kitchen, technical debt is mostly invisible to our non-technical stakeholders. They can only see the slowing down effect it has, but when they do, it’s often already too late. It’s all about new features, constantly adding new code on already fragile foundations. Another problem is that too much tech debt causes engineering teams to be in fire-fighting mode. Tech debt impacts the whole company, but for engineers, more tech debt means more bugs, more performance issues, more downtime, slow delivery, lack of predictability in sprints, and therefore less time spent building cool stuff. ... Controlling technical debt is a prerequisite to delivering value regularly, just like an organized and clean kitchen is a prerequisite to delivering delicious food regularly. That doesn’t mean you shouldn’t have technical debt. You will always have some mess and that’s healthy too. The goal isn’t to have zero mess; the goal is to get rid of the mess that slows you down and prevents you from running a great kitchen.


When a scammer calls: 3 strategies to protect customers from call spoofing

Humans are invariably going to be the weakest link in the chain; not even the most robust technology can prevent a victim from unwittingly handing over their private credentials. That said, while many financial institutions are investing in educational programs to teach their customers basic principles around protecting their accounts, they need to make it a continuous and ongoing initiative. Likewise, these efforts should extend to the customer-facing workers and especially contact center employees who are ultimately responsible for authenticating a customer’s identity. ... Phone-based scams almost always culminate with the victim transmitting funds, buying untraceable gift cards, or sharing critical data that can be used to create synthetic identities to open new accounts. For financial institutions this means that they need to be able to establish a behavioral baseline of their customers to understand normal interactions from anomalous activities that could be earmarks for potential fraud threats.
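
A behavioral baseline of the kind described can start as simply as a statistical outlier test. The sketch below is deliberately minimal (real fraud systems model many more signals than transfer amounts): it flags an amount far outside a customer's history, using an assumed threshold of three standard deviations:

```python
import statistics

# Hypothetical sketch: compare a new transfer against the customer's
# historical baseline using a z-score; threshold is an assumption.
def is_anomalous(history, amount, threshold=3.0):
    mean = statistics.mean(history)
    stdev = statistics.stdev(history)
    return abs(amount - mean) / stdev > threshold

history = [42.0, 55.0, 38.0, 60.0, 47.0, 51.0]  # typical transfers
print(is_anomalous(history, 49.0))    # → False
print(is_anomalous(history, 5000.0))  # → True
```

In practice the baseline would also cover timing, channel, device, and counterparty patterns, not just amounts.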


Agile Enterprise Architecture Framework: Enabler for Enterprise Agility

The Agile EA Framework (AEAF) helps in breaking barriers between IT and business, ideally with increasing levels of co-location by unit and with fast-forming teams that coalesce for new projects. The initial goal of the architect is to bring out a Minimum Viable Product (MVP), improve upon it, and evolve with each iteration. It would also consider real-time customer feedback while adding more features through the iterations. The overall idea is to adopt just enough architecture to deliver the MVP, thus avoiding any big upfront design. The AEAF helps in defining an architecture using an iterative life cycle, allowing the architectural design to evolve gradually as the problem and the constraints are better understood. The architecture and the gradual building of the system must go hand in hand, with subsequent iterations revisiting architectural issues and decisions to arrive at a flexible architecture. The following diagram depicts the AEAF framework and the constituent steps associated with it.


6 Hobbies You Should Have if You’re Interested in Cybersecurity

Ethical hacking (or "white-hat hacking") occurs when people get permission to try and break into a company’s systems. They then report their methods and how quickly they accomplished the task. Ethical hackers would ideally find problems before malicious parties do, giving companies time to act. Some people specializing in ethical hacking recommend having a wide but shallow knowledge pool. This equips them to find issues in cloud software, and so identify vulnerabilities that help malware flourish. ... Hack the Box is a platform for cybersecurity enthusiasts that combines hacking with gamification. The online modules cater to individuals, universities, and companies, providing content to help people hone their penetration testing skills. Think of Hack the Box as a springboard for people interested in hacking who aren’t sure where to start. Besides offering an educational component, there’s a community aspect. For example, people can discuss their methods and get recommendations for different techniques to apply in the future.


SEC Warns of Fraudulent Cryptocurrency Schemes

Several security and blockchain experts draw a direct line between this fraudulent activity and increasingly sophisticated social engineering attempts, or blatantly false advertising that may lead to poor or unsafe crypto investments. James McQuiggan, education director for the Florida Cyber Alliance and security awareness advocate for the firm KnowBe4, says, "Cybercriminals will always find emotional lures to exploit users through social engineering. Asking yourself the question, 'Is this too good to be true?' is the first step to determine if the organization is worthwhile." Further, Julio Barragan, director of cryptocurrency intelligence at the firm CipherTrace, warns against ongoing scams in which victims are lured by a convincing fraudster sending them direct messages on social media or through a friend's hacked account, promoting massive gains. Neil Jones, cybersecurity evangelist for the firm Egnyte, says: "Significant change [in the space] will only occur when cryptocurrency platforms become subject to the same standardized IT requirements as traditional investment platforms ..."


Are you stuck in a “logic box”?

The point of the logic box is to help develop self-awareness, an essential skill of leadership that is becoming more important as we negotiate our VUCA—volatile, uncertain, complex, and ambiguous—world. Leaders and their subordinates must always examine the basic premises of a key decision and interrogate its surface validity. This came up in a recent conversation I had with Dambisa Moyo, a widely published economist who is a board member at Chevron and 3M. One of the most important qualities she looks for when assessing leaders is their ability to use different mental models for analyzing choices, an idea that she attributed to Buffett’s partner at Berkshire Hathaway, Charlie Munger. “It’s this idea of road-testing their thinking using different paradigms,” she said. “So, if, say, an investment looks quite attractive from a financial perspective, it might look less attractive through a geopolitical or environmental lens. Given the world that we live in now, people who think about complex problems in a more versatile way have an advantage.”


Protecting your company from fourth-party risk

Since fourth parties are not generally obligated to share information with partners of their clients, organizations are now adapting their TPRM programs to address fourth-party concerns. Fortunately, there are steps companies can take to give them greater visibility into – and protection from – downstream risk. Despite growing awareness of the threat of fourth-party risk, clear guidelines and uniform processes for fourth parties have not been established, resulting in disjointed, ad-hoc processes. Most of these processes are manual, requiring significant investment in time and labor, and opening the possibility of error and oversight. ... The first step is for companies to understand how their third parties are monitoring their vendors. This includes direct monitoring (i.e., what are they doing to monitor their third parties) and general vendor management (i.e., do they have their own vendor management program and how effective is it). Companies can ask these questions through periodic performance reviews as well as through their annual risk and due diligence reassessments.


Putting people at the heart of digital marketing

A strong marketing team is made up of people with a diverse range of skills – from strategists and data analysts to identify strengths and map trends and focus plans, to creatives and ‘doers’ to design and deliver beautifully tailored campaigns. A good marketer needs to understand how technology can help to enhance, personalise and deliver these campaigns through the appropriate channels – but also to be able to think beyond the barriers of what technology can provide. Technology makes it easy to execute, analyse and measure a marketing strategy with the push of a button and while this is helpful – especially at scale – where we see the most effective personalised marketing is in teams with marketers who are not afraid to ask questions. They need to be able to query the ‘why’, ‘how’ and ‘who’ behind every marketing decision – whether technology or human driven – to ensure it is relevant, beneficial and being delivered to the right people in the best possible way. Good marketers know this and understand that if we want customers to continue to agree to share their data, we need to earn their trust.


How to Enable Team Learning and Boost Performance

Very often, a team with a performance problem lacks knowledge of the strategy. They do not feel they are doing meaningful work. As a leader, you should have defined a framework within which you regularly communicate goals and connect them with strategy. You also need to be open to collecting feedback from your team on whether they feel the goals are achievable. It might be that you have clear goals, but you communicate them once per year. Unfortunately, that might be too rare. Based on your context, you need to define the best cadence to remind the team and yourself about the goals. For teams working in a complex, fast-changing environment, you need to review the goals at least once every three months, maybe even more often. For example, you can schedule release planning or delivery planning sessions with your team. Once every three months, review the delivery roadmap and release plans with your team. Compare them with your team's current velocity and capacity. Discuss the expectations and collect feedback from your team. Afterwards, use sprint review and sprint planning sessions to track progress towards the goal.



Quote for the day:

"A positive attitude will not solve all your problems. But it will annoy enough people to make it worth the effort" -- Herm Albright

Daily Tech Digest - September 07, 2021

Tech jobs are changing. But don't expect a boom in IT salaries just yet

While companies may not be planning large wage incentives for staff, Robert Half found that many were readdressing the benefits packages they offer, with the inclusion of perks such as flexible hours, remote-working options and allowances for home office equipment. Clamp suggests that this focus on the employee experience, rather than substantial pay increases, is what's likely to shape compensation packages in the months ahead. "We think it's part of the employee proposition, and part of the experience that is now pretty common among larger employers, and perhaps smaller ones too -- giving people fulfilment of their work," he says. Meerah Rajavel, CIO at Citrix, agrees. "When it comes to attracting and retaining talent, companies need to look beyond pay," Rajavel tells ZDNet. "Benefits programs should focus on total rewards that support employees in a holistic way, providing not only for their financial security, but their physical, intellectual, social, and environmental well-being." Rajavel points out that pay has always been at a premium in the tech space, but adds that the speed at which the market is currently moving is putting pressure on companies to up the ante.


Becoming a Cybersecurity or Privacy Lawyer: Tips for Young Attorneys

A keen interest in technology is helpful, however, as lawyers in this space need to stay abreast of rapid developments in both the law and the underlying space. And taking some classes in IT can be useful to develop a functional tech vocabulary, as you may often find yourself tasked with translating between IT professionals and business leaders within your clients’ organizations. If you are already a practicing lawyer, seek out relevant CLE content from the Pennsylvania Bar Association, Practicing Law Institute, Privacy + Security Forum, or another provider; these providers offer annual seminars that provide valuable crossover between tech and legal content. ... “The cyber field is always evolving, from risk vectors, to newly enacted laws (or courts’ interpretation of them), to techniques employed by threat actors. Privacy also is in a state of continual change and updates. Collaboration and dialogue with your peers is an important component of the practice, and the Committee offers an opportunity for young lawyers to do just that,” says Joshua Mooney.


Big Banks Benefiting Most From COVID-19 Digital Shifts

One challenge that smaller financial institutions face is that they have older customer bases, which impacts the penetration of digital banking solutions. But there is more than just an age differential. Even taking age out of the equation the largest banks outperform smaller institutions. For instance, midsize banks were found to lag in several digital product usage metrics, such as: Paying bills via online and mobile; Internal funds transfers via mobile app; Using P2P payments in the mobile app; and Receiving alerts via mobile app. Of greater concern is that consumers who do use either online banking or mobile banking are less satisfied with both the design and functionality of the websites and mobile tools. They also report lower satisfaction with the range of services that can be performed with the mobile apps. Beyond redesigning the online banking website or mobile banking app, organizations should focus on the lowest-hanging fruit for increased engagement. This would include linking P2P payments to one of the many available services.


Your hybrid cloud model is just a phase

Hybrid cloud, however, is not a long-term solution. It forms part of a pathway towards a reality in which the public and private sectors alike will use a fully integrated public cloud, whether from international providers such as Amazon Web Services (AWS), Microsoft Azure and Google Cloud Platform or from public sovereign cloud providers, which provide a broad set of infrastructure services, such as computing power, storage options, networking, and databases, delivered on demand. The need for this is more important than ever before, with challenges including governance, data and security threats rapidly rising as key focus areas that organisational personnel and the public need to be educated about. This transitional phase should last between five and ten years. As this process takes place, there is likely to be resistance from those with lingering concerns – such as the governance issue I noted above. Alleviating these concerns will mean zeroing in on the things that will permit organisations and public sector entities to evolve in the way they want.


Urban mining: the hidden value of e-waste

“E-waste is the world’s fastest growing waste stream,” said Fred White, commercial manager at Argo Natural Resources. “Just looking at the market size, it’s quite significant, and the rate of growth is enormous – it’s projected to grow by 40% over the next 10 years. A lot of recycling capacity needs to come online to deal with that growth. “We see it as a big opportunity. Global demand for electronic goods is soaring – how many phones and laptops do you have today, compared to 10-15 years ago? And how long do you keep those phones?” Argo is commercialising Deep Eutectic Solvents (DES), a chemistry that has been under research and development at the University of Leicester for nearly 20 years. DES consists of non-toxic, environmentally benign and chemically stable ionic liquids that can be used to extract a wide range of metals. “DES is a platform chemistry of millions of different combinations of salts and simple organic compounds,” White explained. “They can be combined in certain ways to do a wide variety of things.”


The IOT Technologies Making Industry 4.0 Real

IoT devices need internet connectivity to work. However, even the strongest network is bound to experience overload at some point. No matter how sophisticated technology gets, constantly being connected to a network is a fundamental weakness, especially on an industrial scale. More companies these days favor IoT devices that use intermittent connectivity protocols, as opposed to constant Wi-Fi or cellular connections, as a way of overcoming this challenge. The logistics industry provides a great case study for the positives of intermittent connectivity. Traditionally, data logger devices that connect using radio-frequency identification (RFID) transmitters or even USB cables have been used to collect condition and location information on stored and shipped materials. But plugging in all those loggers intermittently is extremely labor intensive, and RFID syncs with unreliable towers that are dependent on expensive proprietary systems. Finnish firm Logmore's dynamic e-ink QR code solution is an example of how to use intermittent connectivity at scale. IoT sensors attached to the tags collect information, which refreshes a QR code on a small display.


IoT Attacks Skyrocket, Doubling in 6 Months

With millions still working from home, cybercriminals are targeting corporate resources via home networks and in-home smart devices too, according to Red Canary’s Grant Oviatt. They know organizations haven’t quite gotten used to the new perimeter — or lack thereof. “Throughout the past 12 months, the lack of [incident] preparedness has become increasingly evident, especially with the influx of personal devices logging onto corporate networks, the resulting reduced endpoint visibility, expanded attack surface and surge in attack vectors,” he said in a recent Infosec Insider column for Threatpost. In real-world attacks, the end result of attacks on IoT gear is evolving, Kaspersky found: infected devices are being used to steal personal or corporate data and to mine cryptocurrencies, on top of traditional DDoS attacks in which the devices are added to a botnet. For instance, the Lemon Duck botnet targets victims’ computer resources to mine the Monero virtual currency, and it has self-propagating capabilities and a modular framework that allows it to infect additional systems to become part of the botnet too.


Adoption of Cloud Native Architecture, Part 3: Service Orchestration and Service Mesh

In this design, every application and service embeds all of its non-functional code inside itself. There are plenty of disadvantages to this approach. There is a lot of duplicate implementation and proliferation of the same functionality in each application and service, resulting in longer application development (time to market) and exponentially higher maintenance costs. With all these common functions embedded inside each app and service, all are tightly coupled with the specific technologies and frameworks used for each of those functions — for example, Spring Cloud Gateway for routing and Zipkin or Jaeger for tracing. Any upgrade to the underlying technologies requires every application and service to be modified, rebuilt, and redeployed, causing downtime and outages for users. Because of these challenges, distributed systems are becoming complex. These applications need to be redesigned and refactored to avoid siloed development and the proliferation of one-off solutions.
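The coupling problem can be made concrete with a minimal sketch. Below, a cross-cutting concern (tracing) is implemented once, outside the business function, so swapping the tracing backend never touches service code. In a service mesh this separation is enforced by a sidecar proxy; the decorator here merely stands in for that idea, and all names are illustrative.

```python
import functools
import time
import uuid

def traced(func):
    """Cross-cutting tracing concern, defined once outside the services.
    In a mesh, a sidecar proxy would do this; here a decorator stands in."""
    @functools.wraps(func)
    def wrapper(*args, **kwargs):
        span_id = uuid.uuid4().hex[:8]
        start = time.perf_counter()
        try:
            return func(*args, **kwargs)
        finally:
            elapsed_ms = (time.perf_counter() - start) * 1000
            print(f"span={span_id} op={func.__name__} ms={elapsed_ms:.2f}")
    return wrapper

@traced
def get_order(order_id):
    # Pure business logic: no tracing framework is imported here, so the
    # tracing backend (Zipkin, Jaeger, ...) can change without a rebuild.
    return {"id": order_id, "status": "shipped"}

order = get_order(42)
```

The point is not the decorator itself but the boundary it draws: upgrading the tracing implementation changes one place, not every service, which is exactly the duplication problem the paragraph describes.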


How tech is a vital weapon against cyber information warfare

Using data ethically and securely is critically important in a digital age, where growing amounts are being created every day. Doing so is no longer just an optional extra, but a human right all of its own. But too many businesses still have a lax approach to data security, and it’s inadvertently aiding cybercriminal efforts. The long list of fines handed out by the ICO is testament to the fact that not enough is being done to protect citizens. While reputational damage and fines can be big deterrents, data breaches are still a regular occurrence. Improving data protection relies on businesses taking a proactive stance, and once again technology can step in and play an important enabling role. Irrespective of your business size, you need to look for modern data protection solutions that factor in data security, compliance and customer privacy requirements from the very start. Read customer testimonials, conduct your own research and look to respected awards bodies to help in that decision, rather than just relying on a vendor’s word that their solutions are secure.


Tailoring SD-WAN to fit your needs

Most SD-WANs simply look at packet types or maybe TCP/UDP port numbers, which assumes that all voice packets or all packets for a particular application have the same priority. In many cases, users prioritize specific worker-to-application relationships, not all users of a given application, so prioritization may offer less value than you think. If you have specific reasons for selecting an SD-WAN that has higher header overhead or one that can’t prioritize as you’d like, you can reduce the impact of both these issues by using access links with higher bandwidth if they’re available. If not, and you need to use access bandwidth efficiently, then take the time to assess your vendor options in light of the overhead and prioritization issues. That also goes for security. If an SD-WAN can recognize specific worker-to-application relationships, it can not only prioritize the important ones, but also recognize which of all the possible worker-to-application relationships are actually permitted. That means the SD-WAN can create better security as well.
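The dual role described above — one policy table driving both prioritization and permission — can be sketched in a few lines. This is not any vendor's actual policy engine; the groups, applications, and priority values below are hypothetical, chosen only to show how keying on (worker group, application) pairs differs from keying on port numbers, and how unlisted pairs become an implicit deny.

```python
# Policy keyed on (worker group, application) pairs rather than on port
# numbers alone. Unlisted pairs are denied, so the same table that drives
# prioritization doubles as an allow-list. (All names are illustrative.)
POLICY = {
    ("finance", "erp"): {"priority": 1},
    ("support", "voip"): {"priority": 1},
    ("engineering", "git"): {"priority": 2},
    ("any", "web"): {"priority": 3},
}

def classify(worker_group, application):
    """Return (allowed, priority) for a worker-to-application flow."""
    for key in ((worker_group, application), ("any", application)):
        if key in POLICY:
            return True, POLICY[key]["priority"]
    # Relationship not permitted: the flow can be dropped or quarantined.
    return False, None

finance_erp = classify("finance", "erp")
engineering_erp = classify("engineering", "erp")
```

A port-based classifier would give every ERP flow the same treatment; here finance-to-ERP is prioritized while engineering-to-ERP is simply not permitted, which is the security benefit the article points to.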



Quote for the day:

"The leadership team is the most important asset of the company and can be its worst liability" -- Med Jones