
Daily Tech Digest - March 02, 2025


Quote for the day:

"The ability to summon positive emotions during periods of intense stress lies at the heart of effective leadership." -- Jim Loehr


Weak cyber defenses are exposing critical infrastructure — how enterprises can proactively thwart cunning attackers to protect us all

Weak cybersecurity isn’t merely a corporate issue — it’s a national security risk. The 2021 Colonial Pipeline attack disrupted energy supplies and exposed vulnerabilities in critical industries. Rising geopolitical tensions, especially with China, amplify these risks. Recent breaches attributed to state-sponsored actors have exploited outdated telecommunications equipment and other legacy systems, revealing how complacency in updating technology can put national security in danger. For instance, last year’s hack of U.S. and international telecommunications companies exposed phone lines used by top officials and compromised data from systems for surveillance requests, threatening national security. Weak cybersecurity at these companies risks long-term costs, allowing state-sponsored actors to access sensitive information, influence political decisions and disrupt intelligence efforts. ... No company can face today’s cyber threats on its own. Collaboration between private businesses and government agencies is more than helpful — it’s imperative. Sharing threat intelligence in real time allows organizations to respond faster and stay ahead of emerging risks. Public-private partnerships can also level the playing field by offering smaller companies access to resources like funding and advanced security tools they might not otherwise afford.


Evaluating the CISO

Delegation skills are an essential component that should be evaluated separately in this area. Effective delegation is critical to prevent the CISO from becoming a bottleneck, as micromanagement is unsuitable for the CISO role. Delegating complex tasks not only lightens your load but also helps foster the team’s overall competence. Without strong delegation skills, CISOs cannot rate themselves highly in their relationship with the internal security team. ... A CISO is hired to lead, manage, and support specific projects or programs such as migrating to a cloud or hybrid infrastructure, implementing zero-trust principles, launching security awareness initiatives, or assessing risks and creating a roadmap for post-quantum cryptography implementation. The success of these initiatives ultimately falls under the CISO’s responsibility. To execute these programs effectively, the CISO relies heavily on their team and internal organizational peers. As such, building strong relationships with both is essential for successfully delivering projects. ... A CISO must have responsibility for the information security budget, which includes funding for the team, tools, and services. Without direct control over the budget, it becomes challenging to rate the relationship with management highly, as budget ownership is a critical aspect of the CISO’s role.


Unraveling Large Language Model Hallucinations

You might have seen model hallucinations: instances where LLMs generate incorrect, misleading, or entirely fabricated information that appears plausible. These hallucinations happen because LLMs do not “know” facts in the way humans do; instead, they predict words based on patterns in their training data. ... Supervised Fine-Tuning makes the model capable. However, even a well-trained model can generate misleading, biased, or unhelpful responses. Therefore, Reinforcement Learning with Human Feedback is required to align it with human expectations. We start with the assistant model, trained by SFT. For a given prompt we generate multiple model outputs. Human labelers rank or score these outputs based on quality, safety, and alignment with human preferences. We use this data to train a whole separate neural network that we call a reward model. The reward model imitates human scores. It is a simulator of human preferences. It is a completely separate neural network, probably with a transformer architecture, but it is not a language model in the sense that it generates diverse language. It’s just a scoring model.
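The training loop described above is commonly expressed as a pairwise (Bradley-Terry style) objective: the reward model should score the human-preferred completion above the rejected one. A minimal sketch of that loss, with a toy word-count "reward model" standing in for the transformer scorer (the weights and example completions here are invented for illustration):

```python
import math

def pairwise_loss(score_preferred, score_rejected):
    # Bradley-Terry objective: -log(sigmoid(preferred - rejected)).
    # The loss shrinks as the preferred completion's score pulls ahead.
    return -math.log(1.0 / (1.0 + math.exp(-(score_preferred - score_rejected))))

def toy_reward(completion, weights):
    # Toy stand-in for a transformer scorer: sum of per-token weights.
    return sum(weights.get(tok, 0.0) for tok in completion.split())

weights = {"sources": 1.5, "maybe": -0.5}      # illustrative only
preferred = "cites sources clearly"            # labeler-preferred output
rejected = "maybe possibly unsure"             # labeler-rejected output

loss = pairwise_loss(toy_reward(preferred, weights), toy_reward(rejected, weights))
```

In real RLHF the gradient of this loss updates the reward model's parameters; here the point is only that the objective needs nothing more than a scalar score per completion.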


How to Communicate the Business Value of Master Data Management

In an ideal scenario, MDM is integral to a broader D&A strategy, highlighting how D&A supports the organization's strategic goals. The strategy aligns with these goals, prioritizes the business outcomes it will support, and details what is needed to achieve them. Therefore, leaders must first understand and prioritize the explicit business outcomes that MDM will support before creating an MDM strategy. In other words, "improving decision-making" is not good enough. "Increase customer service levels by 5% by end of December 2025" is the level of detail required. D&A leaders may recognize that master data is causing a problem or limiting an opportunity, which is where they would rely on MDM. If this is the case, those D&A leaders should consider questions that help identify the problem, KPIs, and key stakeholders. These questions help identify potential business outcomes that MDM could support. Figure 1 provides a worksheet to build this initial picture and facilitate stakeholder discussions. The worksheet maps high-level goals onto a run-grow-transform framework, which could also be represented by three columns for the primary business value drivers: risk, revenue, and cost.


4 ways to get your business ready for the agentic AI revolution

Agents could be used eventually, but only once a partnership approach identifies the right opportunities. "Agents are becoming a big part of how generative AI and machine learning are used in business today. The way agents will be used in travel will be fascinating to watch. I think this technology will certainly be a part of the mix," he said. "The process for Hyatt will be to find the right technologies -- and we'll do that in close partnership with our business leaders and the technology teams that run the applications. We'll then provide the AI services to drive those transitions for the business." ... Keith Woolley, chief digital and information officer at the University of Bristol, is another digital leader who sees the potential benefits of agents. However, he said these advantages will become manifest over the longer term. "We are looking at agentic AI, but we're not implementing it yet," he said. "We sit as a management team and ask questions like, 'Should we do our admissions process using agentic AI? What would be the advantage?'" Woolley told ZDNET he could envision a situation in which AI and automation help assess and inform candidates worldwide about the status of their applications.


Cloud Giants Collaborate on New Kubernetes Resource Management Tool

The core innovation of kro is the introduction of the ResourceGraphDefinition custom resource. kro encapsulates a Kubernetes deployment and its dependencies into a single API, enabling custom end-user interfaces that expose only the parameters applicable to a non-platform engineer. This masking hides the complexity of API endpoints for Kubernetes and cloud providers that are not useful in a deployment context. ... Kro works seamlessly with the existing cloud provider Kubernetes extensions that are available to manage cloud resources from Kubernetes. These are AWS Controllers for Kubernetes (ACK), Google's Config Connector (KCC), and Azure Service Operator (ASO). kro enables standardised, reusable service templates that promote consistency across different projects and environments, with the benefit of being entirely Kubernetes-native. It is still in the early stages of development. "As an early-stage project, kro is not yet ready for production use, but we still encourage you to test it out in your own Kubernetes development environments," the post states. ... Most significantly for the Crossplane community, Farcic questioned kro's purpose given its functional overlap with existing tools. "kro is serving more or less the same function as other tools created a while ago without any compelling improvement," he observed. 


Why a different approach to AIOps is needed for SD-WAN

AIOps tools enhance efficiency by seamlessly integrating with IT management tools, enabling proactive issue identification and streamlining IT management processes. But more than that, they optimize an organization’s network by improving the performance, efficiency, and dependability of its network resources to ensure optimal user experience. Regarding infrastructure, many organizations now rely on SD-WAN – software-defined wide area network – to manage and optimize data traffic across different types of networks efficiently. SD-WAN is an effective way to connect the organization and provide users with application access. It helps businesses improve their network performance, cut costs, and be more flexible by easily connecting to various network types. ... AIOps tools use the information extracted from SD-WAN systems and autonomously resolve issues without human intervention. Beyond that, AIOps tools utilize predictive analytics to forecast future events or outcomes related to network operations. This makes the whole system run smoother and more reliably, while machine learning algorithms can use this historical data to make predictions and proactively improve the performance of critical applications.
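To make the predictive-analytics idea concrete, here is a deliberately minimal sketch, assuming hypothetical SD-WAN latency telemetry: an exponentially weighted moving average forecasts a metric, and readings that exceed the forecast by a tolerance get flagged. Real AIOps platforms use far richer models; the data, threshold, and function names below are invented for the example:

```python
def ewma_forecast(samples, alpha=0.3):
    # Exponentially weighted moving average over historical telemetry:
    # recent samples weigh more, a crude stand-in for a learned predictor.
    forecast = samples[0]
    for x in samples[1:]:
        forecast = alpha * x + (1 - alpha) * forecast
    return forecast

def flag_anomaly(samples, latest, tolerance=1.5):
    # Flag the latest reading if it exceeds the forecast by the tolerance factor.
    return latest > ewma_forecast(samples) * tolerance

history = [20, 22, 21, 23, 22]  # link latency in ms (hypothetical telemetry)
```

An AIOps pipeline would run checks like `flag_anomaly(history, 45)` continuously and trigger remediation (for example, steering traffic to another SD-WAN path) when a flag fires.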


AI-Driven Threat Detection and the Need for Precision

AI algorithms, particularly those based on machine learning, excel at sifting through massive datasets and identifying patterns that would be nearly impossible for us mere humans to spot. An AI system might analyze network traffic patterns to identify unusual data flows that could indicate a data exfiltration attempt. Alternatively, it could scan email attachments for malicious code that traditional antivirus software might miss. Ultimately, AI feeds on context and content. The effectiveness of these systems in protecting your security posture is inextricably linked to the quality of the data they are trained on and the precision of their algorithms. ... Finally, AI-driven threat detection will not eradicate the need for human expertise. Skilled security professionals should still oversee AI systems and make informed decisions based on their own contextual expertise and experience. Human oversight validates the AI's findings, and threat detection algorithms may not be able to totally replace the critical thinking and intuition of human analysts. There may come a time when human professionals exist in AI's shadow. Yet, at this time, combining the power of AI with human knowledge and a commitment to continuous learning can form the building blocks for a sophisticated defense program.


From Ambiguity to Accountability: Analyzing Recommender System Audits under the DSA

In these early years of the DSA, a range of stakeholders – online platforms, civil society, the European Commission (EC), and national Digital Service Coordinators (DSCs) – must experiment, identify good practices, and share lessons learned. Such iteration is important to ensure an adaptive DSA regime that spurs innovation and responds to shifting technologies, risks, and mitigation strategies. The need for iteration and flexibility, however, should not mean the audits fail to deliver on their potential as vehicles for transparency and accountability. The first round of independent audits of recommender systems reveals clear areas for immediate improvement. Because the core definitions and methodologies were developed independently by platforms and auditors, significant inconsistencies exist in both risk assessment and audit processes. ... The DSA requires the main parameters of recommender systems to be spelled out in plain and intelligible language. What does this concretely mean in the recommender system context? Is it free of “acronyms or complex/technical terminology” (Pinterest), “straightforward vocabulary and easy to perceive, understand, or interpret” (Snap), or “written for a general audience with varying technical skill levels, inclusive of all users” (TikTok)? There's a subtle difference in expectations associated with each framing. These terms don’t need to be defined in a vacuum.


Cybersecurity in retail: What does the future hold?

In the coming year, cybersecurity experts predict attackers will increasingly target Generative AI models used by retailers, creating significant potential for operational disruptions and data breaches. These AI systems, now critical to retail operations, are vulnerable to sophisticated attacks that could compromise customer service efficiency and expose critical business vulnerabilities. The core risk lies in the sophisticated ways attackers can exploit AI’s complex decision-making processes, turning what was once a technological advantage into a potential security liability. Retailers must recognise that their AI systems are not just technological tools, but potential entry points for cybercriminal activities. ... The complexity and distribution of digital ecosystems make them prime targets during high-demand periods. For example, as we have seen in the past, cyberattacks that hit supply chains can cause major delays and financial loss. These incidents underscore the vulnerabilities in supply chains during peak times of the year. In 2025, expect a rise in supply chain attacks during the holiday season, targeting ecommerce platforms and logistics providers, which could disrupt product availability and shipping.

Daily Tech Digest - July 11, 2023

Multiple SD-WAN vendors can complicate move to SASE

The walls between networking and security teams must come down to deliver cloud-based security and network services across today’s sophisticated networks. “The opportunity to leverage a cloud-based architecture to enforce security policies to distributed locations and remote workers is the real value of SASE. It offers management efficiencies, it supports a modern workforce, and it supports an important integration between the network and security teams,” IDC’s Butler says. “In today’s world, when you have so many people working from home and so many distributed applications, a cloud-based security approach is really appealing.” As the market continues to evolve, vendors are boosting their capabilities – networking vendors are acquiring or developing security capabilities to offer SASE, and security providers are augmenting their product portfolios with advanced networking capabilities to offer SASE. That aligns with adoption trends; a majority (68%) of 830 respondents to an IDC survey said they would like to use the same vendor for their SD-WAN and security/SASE solution.


Decoding AI: Insights and Implications for InfoSec

AI is wonderfully adept at narrow tasks, but it is clueless beyond its specific training. It’s like a super-specialist who can thread a needle blindfolded but can’t understand why it shouldn’t sew its own fingers together. Say we task an AI with making a company network as secure as possible. It might suggest shutting down the network, preventing user access or even blocking external dataflows because, hey, it’s technically efficient! ... AI could reshape the world of cybersecurity in unimaginable ways, making our lives easier and more efficient. However, it is essential to bear in mind that AI, despite its remarkable abilities, is essentially a tool. It lacks the human touch—our capacity for intuition, empathy and understanding that extends beyond the data. AI will undoubtedly keep improving, but it is on us to guide its evolution in a way that respects our shared humanity and safeguards our values. So, the next time you see a headline touting the latest AI breakthrough, take a moment to appreciate the amazing technology—but remember that it’s not quite as “intelligent” as it might seem.


Sarah Silverman sues OpenAI, Meta over copyright infringement in AI training

The suits, filed last week in federal district court in San Francisco, argued that Microsoft-backed OpenAI and Meta didn’t have permission to use copyrighted works by Silverman and two other authors, Christopher Golden and Richard Kadrey, when they used them to train ChatGPT and Meta's LLaMA (Large Language Model Meta AI). The suits ask for injunctions against the companies to prevent them from continuing similar practices, as well as unspecified monetary damages. The heart of the lawsuit, according to the complaint, is OpenAI’s use of a data set called BookCorpus, which it said was created in 2015 for the purpose of large language model training. Much of BookCorpus, the plaintiffs say, was copied from a site called Smashwords, a host for self-published novels, which were under copyright. Additionally, the complaint alleges that there is no way that the book-based data sets used to train OpenAI’s models came entirely from legal sources, as no legal databases offer enough content to account for the size of the “Books1” and “Books2” sets.


Law firms under cyberattack

As the UK National Cyber Security Centre (NCSC) noted in a recent report focusing on cyber threats to the legal sector, law firms handle sensitive client information that cybercriminals may find useful, including exploiting opportunities for insider trading, gaining the upper hand in negotiations and litigation, or subverting the course of justice. The potential consequences of such breaches can be severe, as the disruption of business operations can incur substantial costs. Ransomware gangs specifically target law firms to extort money in exchange for allowing the restoration of business operations. In 2020, the Solicitors Regulation Authority (SRA) published a cybersecurity review revealing that 30 out of 40 of the law firms they visited had been victims of a cyberattack. In the remaining ten, cybercriminals had directly targeted the firms’ clients through legal transactions. “While not all incidents culminated in a financial loss for clients, 23 of the 30 cases in which firms were directly targeted saw a total of more than £4m [$5m+] of client money stolen,” the SRA noted.


7 IT consultant tricks CIOs should never fall for

Making a business case - Consultants love this one. It’s where the CIO engages them to build the business case for a pet project or priority — not to determine whether there’s even a business case to be made. To make one, the consultants start with the predetermined answer and work backward from there, employing such questionable practices as cherry-picked data, one-sided analyses, inappropriate statistical tests, and selective anecdotes to define and justify a strategic program whose success depends on … surprise! … a major engagement for the consultant’s employer. ... Win, then hire - This is less common for delivery teams than the consultants whose work resulted in the win that created the need for the delivery team, but still … Few consultancies keep a bench of any size. As a result, winning an engagement is often far more stressful than losing one, because after winning an engagement the consultancy has no more than a month or so to hire the staff needed to execute the engagement, familiarize the newly hired staff with the methodology and practices the engagement calls for, and build a working relationship with their new managers.


Why Qubit Connectivity Matters

Of course, high-connectivity architectures are not without disadvantages. High connectivity relies on the ability to shuttle qubits around, and shuttling qubits carries several potential issues. Shuttling qubits can be a relatively slow process compared to the speed of quantum gate operations. This can increase the total computation time and reduce the number of operations that can be performed before the qubits lose coherence. The process of moving qubits introduces the risk of decoherence, which is the loss of the quantum state due to interaction with the environment. Shuttling qubits also adds an extra layer of complexity to the design of the computer, and this can be challenging to implement, especially in a large-scale system. In summary, qubit connectivity plays a vital role in the performance and functionality of quantum computers. It impacts the implementation of quantum algorithms, the creation of quantum entanglement, error correction, and the overall scalability, speed, and efficiency of quantum computing systems. When one considers the quantum modality of choice for their application, qubit connectivity should be one of the factors taken under consideration.
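The routing cost described above can be made concrete with a toy model: on a nearest-neighbour linear chain, a two-qubit gate between distant qubits first requires swapping one qubit along the chain until the pair is adjacent, while a fully connected (e.g. shuttling-based) architecture pays no such routing cost. A sketch under that simplified assumption (real compilers route more cleverly, so treat this as an upper-bound illustration):

```python
def swaps_needed_linear(i, j):
    # Linear nearest-neighbour topology: to interact qubits i and j,
    # shuttle/swap one of them until the pair is adjacent -> |i - j| - 1 SWAPs.
    return max(0, abs(i - j) - 1)

def swaps_needed_all_to_all(i, j):
    # Full connectivity: any pair can interact directly, no routing SWAPs.
    return 0

# Each SWAP is itself built from two-qubit gates, so the overhead compounds:
# more SWAPs means longer circuits and more exposure to decoherence.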


Analysts: Cybersecurity Funding Set for Rebound

A lot of the optimism has to do with enterprises continuing to invest heavily in cybersecurity, despite a slowdown in other expenditures. Market research firm IDC expects that organizations will spend some $219 billion this year on security products and services — or some 13% more than they did in 2022 — to address threats, to support hybrid work environments, and to meet compliance requirements. The areas that will receive the most spending are managed security services, endpoint security, network security, and identity and access management. "While the theme of conservatism and expectations for continued headwinds have remained throughout the first half of the year, we do expect to see strategic activity slowly begin to rebound in the second half of 2023 and into 2024," says Eric McAlpine, founder and managing partner of analyst firm Momentum Cyber. Financing and M&A activity will both eventually pick up as companies that were able to make do financially so far begin to feel the need for fresh capital to fuel their business, he says.


Why Enterprises Should Merge Private 5G With Programmable Communications

5G private networks provide an opportunity to integrate the application and the network so that the two can inform one another, allowing adjustments to be made in real time. Businesses not only have an improved network with a private cellular network, but they can also sync their applications with the network’s performance, enabling multiple tasks to be completed based on network performance at a specific moment. ... A new generation of digital engagement providers is looking at how these communication platforms evolve into platforms that integrate across a range of business processes. They are not only leveraging robust voice, video and messaging solutions but also introducing fully programmable computer vision and audio analytics solutions. This combination of communications and AI-based media analytics and programmability makes this evolved communications platform an ideal and unexpected solution to Industry 4.0 business needs. New communication platforms are focused less on meeting a single business need and more on the integration of communications to evolve and inform applications, making adjustments and building cost-effective efficiencies.


5 ways to prepare a new cybersecurity team for a crisis

Not all security incidents cause an enterprise-level crisis, and not all crises are cyber-related. Natural disasters, product recalls, accidents, and public relations debacles are all examples of non-cyber events that could have a significant negative impact on an organization. So, in preparing a new cybersecurity team for a crisis, it is important to define and rank--first, by severity and then by likelihood--what precisely the business would define as a security “crisis,” says John Pescatore, director of emerging security trends at the SANS Institute. “It is not the case that the top of the list will always be something like ransomware,” Pescatore says. Sometimes, a crisis might have nothing to do with cybersecurity, he notes. “For example, I remember hearing a Boston-area hospital CIO talk about how they were bombarded with attempts to get into hospital data after the [Boston Marathon] bombing because press reports had noted the bombers went to that hospital.” Once the cybersecurity team has an understanding of what would constitute a security crisis for the company, create playbooks for the top handful of them.
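The "rank first by severity, then by likelihood" advice maps directly onto a sort with a composite key. A minimal sketch, with scenario names and 1-5 scores invented purely for illustration:

```python
scenarios = [
    # (name, severity 1-5, likelihood 1-5) -- hypothetical scores a team
    # would assign during its own crisis-definition workshop.
    ("ransomware outage", 5, 3),
    ("PR debacle after breach disclosure", 3, 4),
    ("insider data theft", 4, 2),
    ("phishing of executives", 3, 5),
]

# Rank by severity first, breaking ties by likelihood, both descending.
ranked = sorted(scenarios, key=lambda s: (s[1], s[2]), reverse=True)

# Playbooks would then be written for the top handful of `ranked`.
```

Note that the ordering is not the same as sorting by severity x likelihood product; severity dominates, which matches the "first by severity, then by likelihood" guidance.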


Writing your company’s own ChatGPT policy

To help employees grasp and embrace key basics quickly, one useful starting point can be signposting relevant parts of existing policies they can check for best practices. Producing tailored guidance for an internal ChatGPT policy is slightly more complex. To develop a truly all-encompassing ChatGPT policy, companies will likely need to run extensive cross-business workshops and individual surveys which enable them to identify, and discuss, every use case. Putting in this groundwork, however, will allow them to build specific directions which ultimately ensure better protection, as well as giving workers the comprehensive knowledge required to make the most of advanced tech. ... Explicitly highlighting threats and setting unambiguous usage limitations is also just as critical to leave no room for accidental misuse. This is particularly important for businesses where generative AI may be deployed to streamline tasks that involve some level of PII, such as drafting client contracts, writing emails, or suggesting which code snippets to use in programming.
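One concrete usage limitation a policy can back with tooling is redacting obvious PII before text leaves the company for an external generative AI service. The sketch below is illustrative only: the patterns are deliberately minimal, and a real deployment would need a vetted PII-detection service rather than two regexes.

```python
import re

# Minimal, illustrative patterns. Real PII detection needs much more
# (names, addresses, account numbers, locale-specific formats, ...).
PATTERNS = {
    "EMAIL": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.-]+\b"),
    "PHONE": re.compile(r"\b\d{3}[-.\s]\d{3}[-.\s]\d{4}\b"),
}

def redact(text):
    # Replace likely PII with placeholder tags before the text is sent
    # to an external generative AI service.
    for label, pattern in PATTERNS.items():
        text = pattern.sub(f"[{label}]", text)
    return text
```

A policy might require such a filter in any workflow that drafts client contracts or emails with AI assistance, paired with the explicit guidance the article recommends.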



Quote for the day:

"Learning is a lifetime process, but there comes a time when we must stop adding and start updating." -- Robert Brault

Daily Tech Digest - November 21, 2022

Achieve Defense-in-Depth in Multi-Cloud Environments

Many organizations are adopting log-based solutions (from endpoint to perimeter security), which is a good first step, but logs can be bypassed or disabled. Even worse, hackers can manipulate logs to give the appearance that “everything is fine,” when in fact, they are moving between users, resources and exfiltration. The solution to this problem is to normalize visibility across the locations where your organization’s data lives – cloud, on-premises, and data center environments. Knowing that IT and Security teams rely on logs makes them attractive targets for hackers today. However, taking a defense-in-depth approach versus logs alone is now critical to ensuring that every single entry point to your organization is secure. Network intelligence plays a huge role in gaining visibility – it is the only way to ensure visibility into all of the data in motion across your entire infrastructure and prevent risks. ... Just like cloud infrastructure management is a shared responsibility within the organization, so must enterprise security, including data security, be a shared responsibility.
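One standard way to make log manipulation detectable, rather than trusting logs outright, is an append-only hash chain: each record commits to the previous one, so altering any earlier entry invalidates every later digest. A minimal sketch (a complement to, not a substitute for, the broader defense-in-depth controls the article describes):

```python
import hashlib

GENESIS = "0" * 64  # fixed anchor for the first record

def chain_logs(entries):
    # Build (entry, digest) pairs where each digest covers the previous
    # digest plus the current entry -- editing entry k breaks digests k..n.
    chained, prev = [], GENESIS
    for entry in entries:
        digest = hashlib.sha256((prev + entry).encode()).hexdigest()
        chained.append((entry, digest))
        prev = digest
    return chained

def verify_chain(chained):
    # Recompute every digest; any mismatch means the log was tampered with.
    prev = GENESIS
    for entry, digest in chained:
        if hashlib.sha256((prev + entry).encode()).hexdigest() != digest:
            return False
        prev = digest
    return True

logs = chain_logs(["login ok", "file read", "logout"])
```

An attacker who rewrites "file read" would also have to recompute and overwrite every subsequent digest, which is why production systems anchor the chain's head in a separate, write-once location.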


A Serverless-First Mindset in an Evolving Landscape

A serverless-first mindset is no doubt beneficial in a number of ways, but some businesses may have reservations in terms of the potential for vendor lock-in, the security offered by the cloud provider, existing sunk costs and other issues in debugging and development environments. However, even among the most serverless-averse, this mindset can provide benefits to a select part of an organisation. When looking at a bank’s operations, for example, the continued uptime of the underlying network infrastructure is crucial for database access; with a serverless-first approach, employees have the flexibility to develop consumer-facing apps and other solutions as consumer demand increases. Agile and serverless strategies typically go hand-in-hand, and both can encourage quick development, modification and adaptation.


IT talent: The 3 C's for life/work balance

Compensation and benefits are not just lifestyle issues. Although these have virtually nothing to do with how much we enjoy our time at work or how far and fast we advance our careers, they carry a lot of psychological value in our culture because they feed ego and self-esteem. Few people who love their job, have great career prospects, work for a wonderful boss, and have a short commute will move simply for the money. Conversely, many are looking to leave high-paying jobs because their boss is a jerk, the commute is too long, or their skills are outdated. Many candidates initially cite compensation as their top criterion to make a move. Still, I have yet to meet a candidate who would accept a position sight unseen without knowing specific details of the job’s other C's. Big money or great benefits have never made a bad job good. Compensation comes to mind first because it is tangible, measurable, and has psychological power, but underlying its number-one ranking is the assumption that all the other criteria are met. Like everything else, compensation and benefits for a specific role are determined by an ever-changing marketplace.


Extortion Economics: Ransomware's New Business Model

This industrialization of cybercrime has created specialized roles in the RaaS economy. When companies experience a breach, multiple cybercriminals are often involved at different stages of the intrusion. These threat actors can gain access by purchasing RaaS kits off the Dark Web, consisting of customer service support, bundled offers, user reviews, forums, and other features. Ransomware attacks are customized based on target network configurations, even if the ransomware payload is the same. They can take the form of data exfiltration and other impacts. Because of the interconnected nature of the cybercriminal economy, seemingly unrelated intrusions can build upon each other. For example, infostealer malware steals passwords and cookies. These attacks are often viewed as less serious, but cybercriminals can sell these passwords to enable other, more devastating attacks. However, these attacks follow a common template. First comes initial access via malware infection or exploitation of a vulnerability. Then credential theft is used to elevate privileges and move laterally.


7 Microservice Design Patterns To Use

Saga pattern - This microservice design pattern provides transaction management using a sequence of local transactions. A saga guarantees that either all of its operations complete successfully, or the corresponding compensating transactions run to undo the work previously done. Furthermore, in Saga, a compensating transaction should be retriable and idempotent. These two principles ensure that transactions can be managed without manual intervention. The pattern is also a way of managing data consistency across microservices in distributed transaction instances. ... Event Sourcing - Event sourcing defines an approach to handling data operations driven by a sequence of events, each of which is recorded in an append-only store. The app code sends a series of events that describe every action that happened on the data to the event store. Typically, the event store publishes these events so consumers can be notified and handle them if required. For instance, consumers could initiate tasks that apply the events’ operations to other systems or perform any other action needed to complete an operation.
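The Saga pattern can be sketched as a list of (action, compensation) pairs: if any local transaction fails, the compensations for the steps already completed run in reverse order. A minimal sketch with a hypothetical order workflow (the step names and failure are invented for illustration):

```python
class SagaFailure(Exception):
    """Raised by a local transaction to signal the saga must roll back."""

def run_saga(steps):
    # Run (action, compensation) pairs in order; on failure, undo the
    # completed steps by running their compensations in reverse.
    done = []
    try:
        for action, compensate in steps:
            action()
            done.append(compensate)
    except SagaFailure:
        for compensate in reversed(done):
            compensate()  # compensations should be retriable and idempotent
        return False
    return True

ledger = []  # records the side effects of each local transaction

def fail_shipping():
    raise SagaFailure("shipping unavailable")

steps = [
    (lambda: ledger.append("reserve inventory"),
     lambda: ledger.append("release inventory")),
    (lambda: ledger.append("charge card"),
     lambda: ledger.append("refund card")),
    (fail_shipping, lambda: None),
]
ok = run_saga(steps)  # the third step fails, so the saga compensates
```

In a real system each action and compensation would be a call to a separate service, coordinated either by a central orchestrator (as here) or by choreography through events.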


Enterprises embrace SD-WAN but miss benefits of integrated approach to security

When asked to list the challenges they faced when taking a do-it-yourself (DIY) approach to SD-WAN, respondents cited difficulties related to hiring and retaining a skilled in-house workforce, keeping up with technology developments and the ability to negotiate favourable terms with technology vendors. “Now that SD-WAN has matured and has been widely adopted, the complexity of deployments has grown, challenging enterprises on multiple fronts and compromising their ability to realise the full benefits of the technology,” said James Eibisch, research director, European infrastructure and telecoms, at IDC, commenting on the study. “Enterprises are increasingly reliant on the resources and expertise of a managed service provider to ensure they deploy SD-WAN in a way best suited to meet their organisations’ objectives. Security approaches like secure access service edge (SASE) that combine the benefits of SD-WAN with zero-trust network access and content filtering features are well poised to dominate the next phase of SD-WAN enhancements as enterprises continue to enable the cloud IT model and a hybrid workforce,” he added.


Quantum computing: Should it be on IT’s strategic roadmap?

Quantum computing is a nascent field. Few companies are planning to purchase quantum computers, but some are starting to use them for competitive advantage. For this reason alone, quantum computing should have a place on IT strategic roadmaps. Financial services institutions like banks and brokerage houses are beginning to experiment with quantum computing as a way to process large volumes of financial transactions more quickly. Quantum computing can also be used for financial risk analysis, and financial services companies are using it for fraud detection. Quantum computing can be used to assess worldwide supply chain risks such as weather, strikes and political unrest, with an eye toward eliminating supply chain bottlenecks before they happen. Pharmaceutical companies are experimenting with quantum computing as a way to assess the viability of new drug combinations and their beneficial and adverse effects on humans. The goal is to reduce R&D costs and speed new products to market. They are also looking to customize drugs to each individual patient's situation.


Big Tech Layoffs: A Flood of Talent vs the Hiring Crisis

There has been a sea change in the prospects certain big tech players anticipated would continue to buoy their sector. Sachin Gupta, CEO of HackerEarth, says many big tech and social media platforms saw explosive growth when the pandemic changed spending patterns and drove moves to work remotely and conduct more activities online. “What the businesses started thinking was this was going to last forever, which is very natural,” he says. It is very difficult to be in the midst of such a wave, he says, and then predict that it would not continue. The reasons behind the recent layoffs and firings differ, of course. Meta’s troubles include not seeing expected traction -- such as its exploration of the metaverse. Meanwhile, Twitter is in the throes of a regime change that has been acrimonious for at least some of the rank and file of the company, which has seen sweeping layoffs, resignations, and outright firings of personnel new CEO Elon Musk no longer wanted to darken the company’s door -- office doors that Musk abruptly ordered to be shut (temporarily) and locked last week even to remaining employees.


Creating an SRE Practice: Why and How

The most important first step is to adopt the SRE philosophies mentioned in the previous section. The one that will likely have the fastest payoff is striving to eliminate toil. CI/CD can do this very well, so it is a good starting point. If you don't have a robust monitoring or observability system, that should also be a priority so that firefighting is easier for your team. ... You can't boil the ocean. Everyone will not magically become SREs overnight. What you can do is provide resources to your team (some are listed at the end of this article) and set clear expectations and a clear roadmap for how you will go from your current state to your desired state. A good way to start this process is to consider migrating your legacy monitoring to observability. For most organizations, this involves instrumenting their applications to emit metrics, traces, and logs to a centralized system that can use AI to identify root causes and pinpoint issues faster. The recommended approach to instrumenting applications is OpenTelemetry, a CNCF-supported open-source project that ensures you retain ownership of your data and that your team learns transferable skills.
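
As a toy illustration of that instrumentation step (a stand-in for what the OpenTelemetry SDK does, not its actual API), an application can emit metrics, traces and logs as structured events to a single central sink:

```python
# Toy telemetry: every signal type lands in one queryable place.
# "SINK" stands in for a centralized observability backend.
import time
import uuid

SINK = []

def emit(kind, name, **attrs):
    SINK.append({"kind": kind, "name": name, "ts": time.time(), **attrs})

def traced(fn):
    """Decorator: wrap a function in a span-like record with its duration."""
    def wrapper(*args, **kwargs):
        span_id = uuid.uuid4().hex
        start = time.time()
        try:
            return fn(*args, **kwargs)
        finally:
            emit("trace", fn.__name__, span_id=span_id,
                 duration_s=time.time() - start)
    return wrapper

@traced
def handle_request():
    emit("metric", "requests_total", value=1)
    emit("log", "request handled", level="info")

handle_request()
# SINK now holds one metric, one log and one trace record for the request.
```

With all three signal types in one store, correlating a slow trace with the logs and metrics emitted during it becomes a simple query rather than a cross-system hunt, which is the point of centralizing observability data.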


The Challenge of Cognitive Load in Platform Engineering

You must never forget that you are building products designed to delight their customers - your product development teams. Anything that prevents developers from smoothly using your platform, whether a flaw in API usability or a gap in documentation, is a threat to the successful realisation of the business value of the platform. Through the lens of cognitive load theory, delight becomes a measure of how much cognitive burden the platform removes from development teams as they work to accomplish their tasks. The main focus of the platform team, as described by Kennedy, is "on providing 'developer delight' whilst avoiding technical bloat and not falling into the trap of building a platform that doesn't meet developer needs and is not adopted." She continues by noting the importance of paved paths, also known as Golden Paths: by offering Golden Paths to developers, platform teams can encourage them to use the services and tools that are preferred by the business.



Quote for the day:

"Leadership is familiar, but not well understood." -- Gerald Weinberg

Daily Tech Digest - October 17, 2022

Get ready for the metaverse

“The metaverse presents an opportunity to more fully transcend our physical limitations,” says Anand Srivatsa, CEO of Tobii. “Technologies like eye tracking will play a critical role in helping reduce the need for compute and networking power, which are required to deliver lifelike, immersive virtual environments. Eye tracking will also help users express their attention and intent in more realistic ways when they’re in the digital universe.” ... If human-digital devices enable the experience, and infrastructure supports metaverse-scale interactivity, then it’s how real the experience feels to users that will be the primary innovation and differentiator. To start, organizations will need strong dataops capabilities, and machine learning models will likely require synthetic data generation. Zuk continues, “Businesses looking to make waves in the metaverse usually begin by establishing a robust data pipeline—with synthetic data as the primary resource driving the development life cycle.” Bart Schouw, chief evangelist at Software AG, agrees.


Cybercriminals are having it easy with phishing-as-a-service

Phishing-as-a-service is a fairly new phenomenon in which the cybercriminal takes on the role of a service provider, carrying out attacks for others in exchange for a sum of money instead of only for themselves. PaaS only serves to show how hackers are becoming better organized and looking for greater monetisation from ransomware. Instead of threat actors being required to have the technical knowledge to build or take over infrastructure to host a phishing kit (a login page emulating known login interfaces like Facebook/Amazon/Netflix/OWA), the barrier to entry is significantly lowered with the introduction of PaaS. ... Phishing-as-a-service can be very advanced, with capabilities spanning from detecting sandbox environments to fingerprinting user agents in order to determine whether a visitor might be a researcher's bot. That being said, web content filters can often limit the exposure of users.


Top 5 Data Science Trends That Will Dominate 2023

Automation plays a significant role in transforming the world. It has stimulated various transformations in business, resulting in sustained proficiency. In the past few years, some of the best automation capabilities have come from the industrialisation of big data analytics. Analytic Process Automation (APA) encourages growth by providing prescriptive and predictive abilities, along with other insights, to businesses. Through this, businesses have been able to achieve excellence with efficient results and lower costs. Analytic Process Automation mainly harnesses computing power to help make the right decisions. Data analytics automation can be considered a genuinely disruptive force. Big data analysis helps substantially with stimulating valuable data usage and productivity. ... Data governance determines how data is accessed all over the world. General Data Protection Regulation (GDPR) compliance has led various organizations and businesses to prioritize data governance and how they handle consumer data.


Code Red: the Business Impact of Code Quality

The main problem with technical debt is that code lacks visibility. Code is an abstract concept that isn’t accessible to all members of your organization. Hence, it’s easy to ignore technical debt even if we are aware of the general problem. Quantifying and visualizing the situation in your codebase is key, both for the engineering teams as well as for product and management. Visualisations are wonderful as they let us tap into the most powerful pattern detector that we have in the known universe: the human brain. I explored the concept at depth in Your Code as a Crime Scene, and founded CodeScene back in 2015 to make the techniques available to a general audience. ... With code health and hotspots covered, we have everything we need for taking it full circle. Without a quantifiable business impact, it’s hard to make the case for investing in technical debt paydowns. Any measures we use risk being dismissed as vanity metrics while the code continues to deteriorate. We don’t want that to happen.
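
One common approximation of the hotspot analysis CodeScene popularized is to rank files by change frequency times a complexity proxy such as lines of code. The files, change counts and sizes below are invented for illustration:

```python
# Hotspot sketch: frequently changed AND large files are the debt that
# actually costs money; rarely touched complex code can often wait.

def hotspots(change_counts, loc):
    """Return file names sorted by change_count * lines_of_code, descending."""
    scores = {f: change_counts.get(f, 0) * loc.get(f, 0) for f in loc}
    return sorted(scores, key=scores.get, reverse=True)

change_counts = {"billing.py": 42, "utils.py": 5, "parser.py": 30}
loc = {"billing.py": 1200, "utils.py": 3000, "parser.py": 150}

ranked = hotspots(change_counts, loc)
# billing.py (42*1200) outranks utils.py (5*3000) and parser.py (30*150),
# so a paydown there has the largest expected business impact.
```

Plotting these scores is one way to make the abstract codebase visible to product and management, as the text argues.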
Those at the cutting edge of ML are increasingly turning to synthetic data to circumvent the numerous constraints of original or real-world data. For instance, company Synthesis AI offers a cloud-based generation platform that delivers millions of perfectly labeled and diverse images of artificial people. Synthesis AI has been able to accomplish many challenges that come with the messy reality of original data. For a start, the company makes the data cheaper. ... The challenges of real-world data don’t end there. In some fields, huge historical bias pollutes data sets. This is how we end up with global tech behemoths running into hot water because their algorithms don’t recognize black faces properly. Even now, with ML technology experts acutely aware of the bias issue, it can be challenging to collate a real-world dataset entirely free of bias. Even if a real-world dataset can account for all of the above challenges, which in reality is hard to imagine, data models need to be improved and tweaked constantly to stay unbiased and avoid degradation over time. That means a constant need for fresh data.


Improve Developer Experience to Prevent Burnout

It’s obvious that a poor developer experience creates a negative impact throughout an entire company. If developers aren’t producing good work due to unhappiness, illness or burnout, it’s likely that organizations aren’t staying at the cutting edge or offering competitive products in the market. A demoralized team can have a really negative business impact, and it can even change the way that people outside the company feel about it. An unhappy team isn’t going to lead to much creativity or productivity. As a way to combat this growing trend, companies are looking left and right for solutions. Some companies are reaching for things like extra PTO days, a full month off, better benefits, pay raises, and more fun work culture or relaxed dress codes. Those things are nice to have, and we’re certainly not speaking ill of any organization trying something new to help their employees. But at the end of the day, if the overwork and unrealistic expectations remain, the developer burnout will remain too.


Top skill-building resources and advice for CISOs

Ultimately, the hiring organisation will define what it needs in terms of cybersecurity to find the right person. In finance and insurance, for example, there will be specific rules that must be followed in different countries and cybersecurity leaders in such organisations may even be liable. In telecommunications, the skills required are likely to be more technical, whereas in government, knowledge around governance and risk is top of the list. “For instance, a smaller organisation which is a greenfield site, or a large multinational where there is already an established security function require different sets of skills and approaches,” Joseph Head, director technical security at Intaso tells CSO. “There are a few commonalities between all CISO roles, however: an understanding of risk and risk appetite — in other words, an understanding of the business, and how much risk it can carry. This dictates how much work a CISO must do, and therefore available budget. Unlocking that budget can only be done by communicating effectively.”


Startup promises SD-WAN service with MPLS reliability, less complexity

Graphiant says that what makes its service different from SD-WAN offerings is how its Stateless Graphiant Core handles WAN data and control planes. The company says many large enterprises have been unwilling to give up the SLAs that come with MPLS for mission-critical traffic. Thus, SD-WAN augments the MPLS network for lower-priority traffic, and the network team must manage two different networks. The operational and administrative overhead of the combined solution, along with the complexity of overlays, tunnels, and policy management, means that many enterprises are turning back to MPLS providers that offer their own SD-WAN or resell others’. That way, enterprises can offload the burden of managing a complicated service themselves. “Enterprise networks have transitioned from predictable topologies to unpredictable ones,” Raza says. He argues that cloud services, IoT, work from home, and a range of other pressures have pushed the MPLS-plus-SD-WAN formula to its breaking point.


High-trust workplace meets no-trust network security

Clearly, the traditional model for IT security is no longer fit for this newly-dispersed world of work and a fresh model is needed — one where the unit of control is identity and where identity is the basis of a system of authorisation and authentication for every device, service and user on your network. Welcome to zero trust, a system which works on the assumption that identity needs to be authenticated and authorised. Given the shift to high-trust digital working environments and the surge in attacks, interest in zero trust is growing. According to Gartner, 40 percent of remote access will be conducted using a zero trust model by 2024 — up from five percent in 2020. Remote work is driving uptake, with zero trust seen as a fast way to achieve security and compliance, according to a Microsoft report on its adoption. Zero trust is implemented through consistent tools, workflows and processes delivered as a set of shared, centrally-managed and automated services. What does this look like? It means codifying policies and procedures for authorisation and access across the technology stacks, domains and service providers that comprise the IT infrastructure.


IT leadership: How to defeat burnout

What sets Liberty Mutual apart from other organizations is our purpose. We exist to help people embrace today and confidently pursue tomorrow. This is our North Star and helps define and guide everything we do. We also understand that combating burnout requires connecting work to outcome. To ensure that this happens, we spend time defining targeted outcomes – the realization of the expected benefit – versus output – for example, simply turning on a new feature in a system. Success is measured by producing results and realizing benefits. Outcome might be the ability to deploy capabilities faster than before, for example. The key word is ‘capabilities,’ which help us deliver better products and services to customers. An outcome is much bigger than an output such as simply turning on a technology. These nuances matter in the context of burnout. If you’re working on a project and you don’t know why you’re doing it or what the intended results are, you’re not connected to why it matters.



Quote for the day:

"Brilliant strategy is the best route to desirable ends with available means." -- Max McKeown

Daily Tech Digest - September 22, 2022

MFA Fatigue: Hackers’ new favorite tactic in high-profile breaches

When an organization's multi-factor authentication is configured to use 'push' notifications, the employee sees a prompt on their mobile device when someone tries to log in with their credentials. These MFA push notifications ask the user to verify the login attempt and will show where the login is being attempted. An MFA Fatigue attack is when a threat actor runs a script that attempts to log in with stolen credentials over and over, causing what feels like an endless stream of MFA push requests to be sent to the account owner's mobile device. The goal is to keep this up, day and night, to break down the target's cybersecurity posture and inflict a sense of "fatigue" regarding these MFA prompts. ... Ultimately, the targets get so overwhelmed that they accidentally click the 'Approve' button or simply accept the MFA request to stop the deluge of notifications they are receiving on their phone. This type of social engineering technique has proven very successful for the Lapsus$ and Yanluowang threat actors when breaching large and well-known organizations, such as Microsoft, Cisco, and now Uber.
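
On the defensive side, one common mitigation is to throttle how many push notifications an account can trigger within a window, so a credential-stuffing script cannot bombard the user. The thresholds and the in-memory store below are purely illustrative:

```python
# Sketch: suppress MFA pushes once an account exceeds a per-window budget.
import time
from collections import defaultdict, deque

WINDOW_SECONDS = 300   # look-back window
MAX_PUSHES = 3         # pushes allowed per window before suppression

_push_log = defaultdict(deque)

def allow_push(account, now=None):
    """Return True if an MFA push may be sent for this account right now."""
    now = time.time() if now is None else now
    log = _push_log[account]
    while log and now - log[0] > WINDOW_SECONDS:
        log.popleft()               # drop entries outside the window
    if len(log) >= MAX_PUSHES:
        return False                # suppress: likely an MFA fatigue attempt
    log.append(now)
    return True

# Four rapid login attempts against one account: the fourth push is suppressed.
results = [allow_push("alice", now=t) for t in (0, 1, 2, 3)]
# results == [True, True, True, False]
```

Suppressed pushes are also a useful alerting signal: a burst of blocked requests for one account is strong evidence the credentials are already stolen.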


Forget digital transformation: data transformation is what you need

One of the most critical aspects of digital transformation is understanding how your organisation leverages data. Once you know how your organisation uses data, you can work on optimising data usage and applying analytics and insights to drive better business outcomes. Without a data strategy in place, it is difficult to know where your data is coming from, what type of data you have, and what you plan to do with it, and your organisation will likely struggle to leverage data for digital transformation efforts. Having a data strategy answers those questions and allows you to create a plan for putting data to work. If you want to leverage data for your digital transformation efforts, you should do a few things. First, you need to understand your data. This means assessing your data sources and determining what type of data you currently have access to. You also need to decide which data sources you need and where you can find them.


The human touch

Combining human and machine capabilities can create a sharper focus to how we view the world around us. So how do you square the two? How do you choose between humans, who excel at their understanding of context and nuance but cannot make consistent decisions, and automated processes, which are far better at being objective but don’t understand the decisions they’re making? The answer comes in recognizing that, while humans and machines are flawed, they are flawed in different ways. When it comes to combining them, you could start, naively, by thinking about the technology first, and expect human operators to fill in the gaps of what the system can’t yet do. Or (better) you can do things the other way around. The contrast between the technology-first and human-first approaches is well illustrated by the development of driverless cars in the last few years. Humans aren’t very good at paying attention for long periods of time, and driverless cars with human monitors have struggled to live up to their early promise. Meanwhile, collision avoidance systems – which largely use much of the same technology – are a good example of building a system around the human


There’s one thing that makes employees want to return to the office, says a new Microsoft report

Microsoft’s study found that 84% of people would be motivated to come into work more frequently by the promise of being able to enhance connections with coworkers. But most bosses are trying to use corporate policies to force them back, rather than using those human connections as leverage. “It turns out that in person connections with the person that [you] work with are the biggest draw,” says Spataro. “They’re bigger than tacos. The idea that I can actually connect with my coworker really, really matters.” Workers are demanding flexibility, which is how the hybrid work week has come into vogue. But Spataro says he thinks, ultimately, the workplace will be looking like the office we know from the pre-pandemic days, but with a lot more flexibility.


Planning the journey from SD-WAN to SASE

Today, organizations are working toward creating a more robust framework of integrated security and networking technologies referred to as Secure Access Service Edge (SASE). This is essentially a combination of SD-WAN and other networking technologies and security services, with the latter now referred to as security service edge (SSE). SSE encompasses a number of security functions to provide the requisite levels of secure connectivity with functionality such as zero-trust network access (ZTNA), data loss prevention (DLP), cloud access security brokers and more. Moving forward, network and security vendors are working to deliver tighter integration with third parties or provide a fully integrated product with both SD-WAN and SSE. Because of SD-WAN's rapid adoption to support direct internet access, organizations can leverage existing products to serve as a foundation for their SASE implementations. This would be true for both do-it-yourself as well as managed services implementations. If you are still in the planning stages for an integrated SASE deployment, you aren't alone. 


What could be the cause of growing API security incidents?

Critical infrastructure sectors such as manufacturing and energy & utilities, which typically rely on legacy systems, ranked unfavourably when measured on a number of metrics. They ranked worst on the percentage of API security incidents in the last 12 months, with 79% of manufacturing and 78% of energy & utilities respondents saying they had experienced incidents of which they were aware. Energy & utilities companies were also the least likely to have a full inventory of APIs and know which return sensitive data, with just 19% confident about this issue. Manufacturing organizations found it most difficult to scale API security solutions, with just 30% saying they found it easy. Furthermore, real-time testing was at its lowest in energy & utilities (7%), whilst manufacturing and energy & utilities were the most likely to conduct API security testing less frequently than once per month, at 20% and 21%, respectively. The relative lack of testing in these critical infrastructure sectors correlates with the number of API security incidents they have suffered in the last 12 months.


Threat Actor Abuses LinkedIn's Smart Links Feature to Harvest Credit Cards

The campaign is not the first time that threat actors have abused LinkedIn's Smart Links feature — or Slinks, as some call it — in a phishing operation. But it marks one of the rare instances where emails containing doctored LinkedIn Slinks have ended up in user inboxes, says Brad Haas, senior intelligence analyst at Cofense. The phishing protection services vendor is currently tracking the ongoing Slovakian campaign and this week issued a report on its analysis of the threat so far. LinkedIn's Smart Links is a marketing feature that lets users subscribed to its Premium service direct others to content the sender wants them to see. The feature allows users to use a single LinkedIn URL to point recipients to multiple pieces of marketing collateral — such as documents, Excel files, PDFs, images, and webpages. Recipients receive a LinkedIn link that, when clicked, redirects them to the content behind it. LinkedIn Slinks also allow users to get relatively detailed information on who might have viewed the content, how they might have interacted with it, and other details.


Clive Humby – data can predict nearly everything about running a business

You really need to think about three things: first, you need to think about what do I really need? In the grocery world, the past four weeks’ transactions compared to the year-on-year sales are much more insightful than having everything, because you want to know what’s changed. How do sales compare from this Easter to last Easter, this Christmas to last Christmas? Understanding relative movement in data. The second thing is to reduce the level of granularity in your data into what I call “baskets of interest”. I am much more interested in the mix of groceries you buy than individual items. And the third thing: while you might have a warehouse of data with everything in it, probably every decision you make will need less than half a per cent of the data. Don’t try to analyse all of your data, all the time. If you are looking for trends you don’t need to look at all of the data, just look at 10 per cent of the data. People tend to over-engineer because the technology companies have told them to.
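
Humby's ten-per-cent point can be illustrated with a quick sketch: estimating basket-level shares from a random sample instead of scanning the full warehouse. The transactions and "baskets of interest" below are invented:

```python
# Trend estimation from a 10% sample of fake grocery transactions.
import random

random.seed(7)
baskets = ["fresh", "frozen", "household"]
transactions = [(random.choice(baskets), random.uniform(1, 50))
                for _ in range(100_000)]

sample = random.sample(transactions, k=len(transactions) // 10)  # 10%

def share_by_basket(rows):
    """Fraction of total spend per basket of interest."""
    totals = {}
    for basket, amount in rows:
        totals[basket] = totals.get(basket, 0.0) + amount
    grand = sum(totals.values())
    return {b: t / grand for b, t in totals.items()}

full = share_by_basket(transactions)
approx = share_by_basket(sample)
# Each basket's share estimated from the sample lands very close to the
# full-scan figure, at a tenth of the processing cost.
```

For relative-movement questions (this Easter versus last Easter), comparing two such sampled estimates is usually all the precision the decision needs.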


Data science engineer: A day in the life

Between communication, data engineering, meaningful result reporting, and more, data scientists have many goals. At Xactly, my daily goal is to illustrate to the rest of the organization and our customers the value of our data. Strategy and evangelization are a huge priority. It’s important to illustrate how data science is useful in other departments like engineering, marketing, customer experience, and sales. In the space of a day, this can be messy, requiring us to dig into the details of how data was created. From this, we hope to create new predictors that could be incorporated into our models. My team focuses on solving various technical problems across the organization daily. Over time, each day’s work contributes to achieving bigger goals. I see it as solving one or two subproblems per day, which over time, feeds into solving a larger problem that serves a bigger purpose. As we finish projects, we build on that success by developing new models and making new insights. For example, a recently deployed model achieved sales forecasting accuracy of nearly 100 percent. 


Universities Urged to Defend Sensitive Research From Hackers

Lawmakers should set a minimum standard around what constitutes acceptable security for any research institutions that are either federally funded or receive federal subsidies, Evanina told the committee. Much of government doesn't have a real understanding of the academic culture and has therefore taken a "search and replace" approach to regulation, in which nonprofit universities and for-profit businesses are expected to follow the same rules, Gamache said. Poorly designed federal mandates attempting to fix cybersecurity in higher education could actually cause harm, he warned. But over the past five years, Gamache says, a number of federal agencies have really tried to understand what the academic community is all about. The FBI has led the way in this effort by going all-in on initiatives such as the Academic Security and Counter Exploitation Program, and the Department of Commerce has also become more engaged, according to Gamache.



Quote for the day:

"The art of communication is the language of leadership." -- James Humes

Daily Tech Digest - May 12, 2022

SD-WAN and Cybersecurity: Two Sides of the Same Coin

SD-WAN is a natural extension of NGFWs that can leverage these devices’ content/context awareness and deep packet inspection. The same classification engines used by NGFWs to drive security decisions can also determine the best links to send traffic over. These engines can also guide queueing priorities, which in turn enables fine-grained quality-of-service (QoS) controls. ... Centralized cloud management is key to enabling incremental updates of these new features. Further, flexible policy-driven routing enables service chaining of new security features in the cloud rather than building these features into the SD-WAN customer premises equipment (CPE). For example, cloud-based services for advanced malware detection, secure web gateways, cloud-access security brokers, and other security features can be enabled via the SD-WAN platform, seamlessly bringing these and other next-gen security functions across the enterprise. The coordination between the cloud-based SD-WAN service and the on-premises SD-WAN CPE allows new security applications to benefit from both the convenience and proximity of an on-site device and the near-infinitely scalable computing power of the cloud.
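
The policy-driven routing described above can be sketched as a tiny link-selection engine. The link metrics, costs and per-class policies below are invented for illustration and are not any vendor's API:

```python
# SD-WAN-style link selection: a classification engine tags a flow with an
# application class; the policy picks the cheapest link meeting that class's
# latency and loss requirements, steering low-priority traffic off MPLS.

LINKS = [
    {"name": "mpls",      "latency_ms": 20, "loss_pct": 0.1, "cost": 10},
    {"name": "broadband", "latency_ms": 45, "loss_pct": 1.0, "cost": 3},
    {"name": "lte",       "latency_ms": 80, "loss_pct": 2.5, "cost": 5},
]

POLICY = {
    "voice":  {"max_latency_ms": 30,  "max_loss_pct": 0.5},
    "saas":   {"max_latency_ms": 60,  "max_loss_pct": 1.5},
    "backup": {"max_latency_ms": 200, "max_loss_pct": 5.0},
}

def select_link(app_class):
    """Cheapest link whose measured latency and loss satisfy the class policy."""
    rule = POLICY[app_class]
    eligible = [l for l in LINKS
                if l["latency_ms"] <= rule["max_latency_ms"]
                and l["loss_pct"] <= rule["max_loss_pct"]]
    return min(eligible, key=lambda l: l["cost"])["name"] if eligible else None

# Voice is pinned to MPLS; SaaS and backup traffic ride cheaper broadband.
```

The same classification result could also drive queueing priority or service-chain a flow through a cloud security function, which is the QoS and SASE tie-in the article describes.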


Introducing AlloyDB for PostgreSQL: Free yourself from expensive, legacy databases

As organizations modernize their database estates in the cloud, many struggle to eliminate their dependency on legacy database engines. In particular, enterprise customers are looking to standardize on open systems such as PostgreSQL to eliminate expensive, unfriendly licensing and the vendor lock-in that comes with legacy products. However, running and replatforming business-critical workloads onto an open source database can be daunting: teams often struggle with performance tuning, disruptions caused by vacuuming, and managing application availability. AlloyDB combines the best of Google’s scale-out compute and storage, industry-leading availability, security, and AI/ML-powered management with full PostgreSQL compatibility, paired with the performance, scalability, manageability, and reliability benefits that enterprises expect to run their mission-critical applications. As noted by Carl Olofson, Research Vice President, Data Management Software, IDC, “databases are increasingly shifting into the cloud and we expect this trend to continue as more companies digitally transform their businesses. ...”


Visualizing the 5 Pillars of Cloud Architecture

If you understand your cloud infrastructure, you can more confidently ensure your customers can rely on your organization. With the ability to constantly meet your workload demands and quickly recover from any failures, your customers can count on you to consistently meet their service needs with little interruption to their experience. A great way to increase reliability in your cloud infrastructure is to set key performance indicators (KPIs) that allow you to both monitor your cloud and alert the proper team members when something within the architecture fails. Using a cloud visualization platform to filter your cloud diagrams and create different visuals of current, optimal and potential cloud infrastructure allows you to compare what is currently happening in the cloud to what should be happening. ... Many factors can impact cloud performance, such as the location of cloud components, latency, load, instance size and monitoring. If any of these factors become a problem, it’s essential to have procedures in place that result in minimal deficiencies in performance. 
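
The KPI-and-alerting idea can be sketched in a few lines; the KPIs, thresholds, readings and team names below are all invented for illustration:

```python
# Compare observed cloud metrics against KPI thresholds and collect
# alerts addressed to the owning team.

KPIS = {
    "availability_pct": {"min": 99.9, "owner": "sre-team"},
    "p95_latency_ms":   {"max": 250,  "owner": "platform-team"},
    "error_rate_pct":   {"max": 1.0,  "owner": "sre-team"},
}

def evaluate(readings):
    """Return (kpi, owner) pairs for every KPI outside its threshold."""
    alerts = []
    for kpi, rule in KPIS.items():
        value = readings[kpi]
        if "min" in rule and value < rule["min"]:
            alerts.append((kpi, rule["owner"]))
        if "max" in rule and value > rule["max"]:
            alerts.append((kpi, rule["owner"]))
    return alerts

alerts = evaluate({"availability_pct": 99.5,
                   "p95_latency_ms": 180,
                   "error_rate_pct": 2.3})
# Availability and error rate breach their thresholds; latency is fine.
```

Feeding the same thresholds into a cloud visualization platform is what lets teams compare "what is currently happening" against "what should be happening", as the text suggests.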


Zero Trust Does Not Imply Zero Perimeter

Don’t get me wrong, the concept of trusting the perimeter is fairly old-school/outdated and does come into conflict with more modern “cloud native” approaches. Remote users will also have issues with latency, especially if you require the users to VPN to your on-premises network and finally establish connectivity with the cloud. The theoretical modern approach is to not trust that perimeter. This doesn’t mean you have to get rid of it, but rather it’s not the default, since increasingly the perimeter is becoming more porous and ill-defined. This is as opposed to a “zero-trust” model, where both the user identity and the device need to be proven before any data, applications, assets and/or services (DAAS) are permitted to communicate with any other services. Going further down memory lane, back in the day the perimeter used to mean that everything was located within your “castle” and perimeter-based system access was “all or nothing” by default. Once users were in, they were in, which also applies to any other type of actor, including malicious actors. Once the perimeter was breached, the malicious actor effectively had unlimited access to everything within the perimeter.
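
The contrast can be illustrated with a toy per-request check: nothing is trusted merely for being "inside", and both user identity and device posture are verified against an explicit grant. All names and rules below are invented:

```python
# Zero-trust sketch: deny by default; allow only authenticated users on
# healthy devices with an explicit grant for the specific resource.

TRUSTED_DEVICES = {"laptop-42"}           # devices with healthy posture
GRANTS = {("alice", "payroll-db"): True}  # (user, resource) authorisation

def authorize(user, device, resource, authenticated):
    if not authenticated:
        return False              # identity must be proven every time
    if device not in TRUSTED_DEVICES:
        return False              # device posture must be proven too
    return GRANTS.get((user, resource), False)

# Same user, same resource: the unmanaged device is refused, even though a
# perimeter model would already consider it "inside the castle".
ok  = authorize("alice", "laptop-42",  "payroll-db", authenticated=True)
bad = authorize("alice", "byod-phone", "payroll-db", authenticated=True)
```

Because the default answer is "no", a breached perimeter no longer hands the attacker unlimited access: every DAAS request still has to pass the same checks.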


As Inflation Skyrockets, Is Now the Time to Pull Back on New IT Initiatives?

There are two big risks associated with pulling back, says Ken Englund, technology sector leader at business advisory firm EY Americas. Pulling back on projects may increase the risk of IT talent turnover, he warns. “Pausing or changing priorities for tactical, short-term reasons may encourage talent to depart for opportunities on other companies' transformational programs.” Also, given current inflationary pressure, “the cost to restart a project may be materially more expensive in the future than it is to complete today.” There's no doubt that pulling back on IT spend saves money over the short term, but short-sighted savings could come at the cost of long-term success. “If an organization must look to cut budgets, start with a strategic review of all projects, identifying which have the greatest possible impact and least amount of risk,” Lewis-Pinnell advises. Examine each project's total cost of ownership and rank them by cost and impact. Strategic selection of IT initiatives can help IT leaders manage through inflationary challenges. “Don’t be afraid to cut projects that aren’t bringing you enough benefit,” she adds.
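The triage described above, examining each project's total cost of ownership and ranking by cost and impact, might look like this rough sketch (all project names and figures are invented):

```python
# Sketch of ranking IT projects by impact per unit of total cost of
# ownership (TCO), so the least-beneficial projects surface for cutting.
# Names and numbers are illustrative only.

projects = [
    {"name": "ERP upgrade",    "tco": 900_000, "impact": 6},
    {"name": "Data platform",  "tco": 400_000, "impact": 8},
    {"name": "Office refresh", "tco": 250_000, "impact": 2},
]

# Highest impact per dollar first; candidates for cuts sit at the bottom.
ranked = sorted(projects, key=lambda p: p["impact"] / p["tco"], reverse=True)
for p in ranked:
    per_million = p["impact"] / p["tco"] * 1_000_000
    print(f'{p["name"]}: {per_million:.1f} impact points per $1M')
```

Cutting from the bottom of such a ranking matches the advice to drop projects that aren’t bringing enough benefit.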


Cyber-Espionage Attack Drops Post-Exploit Malware Framework on Microsoft Exchange Servers

CrowdStrike's analysis shows the modules are designed to run only in-memory to reduce the malware's footprint on an infected system — a tactic that adversaries often employ in long-running campaigns. The framework also has several other detection-evasion techniques that suggest the adversary has deep knowledge of Internet Information Services (IIS) Web applications. For instance, CrowdStrike observed one of the modules leveraging undocumented fields in IIS software that are not intended to be used by third-party developers. Over the course of their investigation of the threat, CrowdStrike researchers saw evidence of the adversaries repeatedly returning to compromised systems and using IceApple to execute post-exploitation activities. Param Singh, vice president of CrowdStrike's Falcon OverWatch threat-hunting services, says IceApple differs from other post-exploitation toolkits in that it remains under constant development even as it is actively deployed and used.


Zero Trust, Cloud Adoption Drive Demand for Authorization

Hutchinson advises enterprises to leverage a model that combines traditional coarse-grained role-based access rules, or RBAC, with a collection of finer-grained attribute-based access rules, or ABAC, that can describe not only the consumer of a service but also the data, system, environment and function. "While traditional RBAC models are easier for developers and auditors to understand, they usually result in role explosion as the system struggles to provide finer-grained authorization. ABAC addresses that fine-grained need but sacrifices both management and understanding as the vast array of elements necessary for such a system makes organizing the data extremely complex," says Hutchinson. He adds: "A complex policy rule might say: 'A customer's transactional data can only be viewed via a secure device at a bank branch by an accredited teller who is from the same country of origin as the customer.' Instead of creating a plethora of new roles to cover all of the different possible combinations, I can use the teller role while also checking attributes that will provide device profile, location, accreditation status and country of origin."
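Hutchinson's example rule can be sketched as a coarse RBAC check followed by finer ABAC attribute checks. The attribute names and data shapes below are assumptions for illustration, not a real policy engine:

```python
# Sketch of a combined RBAC + ABAC check for Hutchinson's teller rule.
# Attribute names (device_profile, location, etc.) are hypothetical.

def can_view_transactions(subject: dict, customer: dict, env: dict) -> bool:
    """Coarse role check first (RBAC), then fine-grained attributes (ABAC)."""
    if subject.get("role") != "teller":        # RBAC: reuse one teller role...
        return False
    return bool(
        subject.get("accredited")              # ...ABAC narrows it further
        and env.get("device_profile") == "secure"
        and env.get("location") == "bank_branch"
        and subject.get("country") == customer.get("country")
    )

# Same role, different attribute outcomes: no role explosion needed.
teller_us = {"role": "teller", "accredited": True, "country": "US"}
teller_ca = {"role": "teller", "accredited": True, "country": "CA"}
branch = {"device_profile": "secure", "location": "bank_branch"}
print(can_view_transactions(teller_us, {"country": "US"}, branch))
print(can_view_transactions(teller_ca, {"country": "US"}, branch))
```

The design point is the one the quote makes: the single teller role stays, and the combinations that would otherwise explode into many roles become attribute checks.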


The Cloud Native Community Needs to Talk about Testing

After getting feedback from the community, including DevOps and QA engineers, the general consensus was that cloud native is clearly a developing field that is still establishing its best practices. We can look at other areas that matured the same way. Not that long ago, we started to hear about DevOps, which brought the concept of shorter and more efficient release cycles; today that feels like a normal standard. More recently, we saw GitOps following the same track, and we are seeing more teams use Git to manage their infrastructure. It’s my belief that cloud native testing will soon follow suit: teams will stop seeing testing as a burden or an extra amount of work that is only “nice to have” and start treating it as part of the process that saves them a lot of development time. Many of you reading this have been building and shipping products for quite some time, and have probably noticed that there are major challenges with integration testing on Kubernetes, especially when it comes to configuring tests in your continuous integration/continuous delivery (CI/CD) pipelines to follow a GitOps approach.
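One concrete pain point in Kubernetes integration testing is waiting for resources to become ready before assertions run. A minimal, hedged sketch of a polling helper follows; `check` stands in for a real readiness probe (for example, querying deployment status through the Kubernetes API):

```python
# Sketch of a readiness-polling helper for integration tests.
# The real `check` would hit the cluster; here a fake becomes ready
# on the third poll so the sketch is self-contained.

import time
from typing import Callable

def wait_until(check: Callable[[], bool], timeout_s: float = 60.0,
               interval_s: float = 1.0) -> bool:
    """Poll `check` until it returns True or the timeout elapses."""
    deadline = time.monotonic() + timeout_s
    while time.monotonic() < deadline:
        if check():
            return True
        time.sleep(interval_s)
    return False

state = {"polls": 0}
def fake_ready() -> bool:
    state["polls"] += 1
    return state["polls"] >= 3

print(wait_until(fake_ready, timeout_s=5.0, interval_s=0.01))
```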


Hybrid work: Best practices in times of uncertainty

Humans are social creatures who require some contact with others, but determining the right balance between proximity and contact in the virtual workplace is difficult – too much contact can be exhausting, and too little can lead to isolation. Work to find a balance that can help support your staff as they navigate the nuanced world of remote work. It’s also important to adopt a blended approach to technology and physical space. A combination of co-working spaces and telepresence tools can be just what you need to facilitate contact and collaboration among employees. This allows for an open environment where people can both collaborate and decompress in their own way while also bringing a sense of connection that may be impossible to achieve in a virtual environment. ... It’s not easy to develop policies that address both business and human needs in remote and hybrid work environments, but one thing remains certain: flexibility paired with autonomy is essential for success. CIOs play a critical role in creating an environment of flexibility and autonomy for staff members – one that can help support their professional development while also fostering increased satisfaction and success.


10 best practices to reduce the probability of a material breach

Cybersecurity is as much about humans as it is about technology. Organizations see fewer breaches and faster times to respond when they build a “human layer” of security, create a culture sensitive to cybersecurity risks, build more effective training programs, and develop clear processes for recruiting and retaining cyber staff. ... Organizations with no breaches invest in a mix of solutions, from the fundamentals such as email security and identity management, to more specialized tools such as security information and event management systems (SIEMs). These organizations are also more likely to take a multi-layered, multi-vendor security approach to monitor and manage risks better through a strong infrastructure. ... With digital and physical worlds converging, the attack surfaces for respondents are widening. Organizations that prioritize protection of interconnected IT and OT assets experience fewer material breaches and faster times to detect and respond.



Quote for the day:

"Good leaders make people feel that they're at the very heart of things, not at the periphery." -- Warren G. Bennis