
Daily Tech Digest - January 16, 2025

How DPUs Make Collaboration Between AppDev and NetOps Essential

While GPUs have gotten much of the limelight due to AI, DPUs in the cloud are having an equally profound impact on how applications are delivered and network functions are designed. The rise of DPU-as-a-Service is breaking down traditional silos between AppDev and NetOps teams, making collaboration essential to fully unlock DPU capabilities. DPUs offload network, security, and data processing tasks, transforming how applications interact with network infrastructure. AppDev teams must now design applications with these offloading capabilities in mind, identifying which tasks can benefit most from DPUs—such as real-time data encryption or intensive packet processing. ... AppDev teams must explicitly design applications to leverage DPU-accelerated encryption, while NetOps teams need to configure DPUs to handle these workloads efficiently. This intersection of concerns creates a natural collaboration point. The benefits of this collaboration extend beyond security. DPUs excel at packet processing, data compression, and storage operations. When AppDev and NetOps teams work together, they can identify opportunities to offload compute-intensive tasks to DPUs, dramatically improving application performance. 
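The offload decision the excerpt describes can be sketched as a simple policy: AppDev flags which tasks are network-bound and CPU-hungry enough to be worth moving to the DPU. This is a hypothetical illustration — the task names, CPU thresholds, and `Task` structure are invented for the sketch, not part of any DPU vendor's API.

```python
# Hypothetical sketch: deciding which application tasks are worth
# offloading to a DPU. Task profiles and thresholds are illustrative.
from dataclasses import dataclass

@dataclass
class Task:
    name: str
    cpu_share: float      # fraction of host CPU the task consumes
    network_bound: bool   # sits on the data path (encryption, packet work)

def offload_candidates(tasks, cpu_threshold=0.10):
    """Return tasks that are both network-bound and CPU-hungry enough
    that moving them to the DPU is likely to pay off."""
    return [t.name for t in tasks
            if t.network_bound and t.cpu_share >= cpu_threshold]

tasks = [
    Task("tls-encryption", 0.25, True),
    Task("packet-inspection", 0.15, True),
    Task("report-rendering", 0.30, False),  # CPU-heavy but not on the data path
]
print(offload_candidates(tasks))  # ['tls-encryption', 'packet-inspection']
```

A joint AppDev/NetOps review of exactly this kind of list — which workloads qualify, and whether the DPUs are configured to accept them — is the collaboration point the article describes.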


The CFO may be the CISO’s most important business ally

“Cybersecurity is an existential threat to every company. Gone are the days where CFOs could only be fired if they ran out of money, cooked the books, or had a major controls outage,” he said. “Lack of adequate resourcing of cybersecurity is an emerging threat to their very existence.” This sentiment reflects the reality that for most organizations cyber threat is the No. 1 business risk today, and this has significant implications for the strategic survival of the enterprise. It’s time for CISOs and CFOs to address the natural barriers to their relationship and develop a strategic partnership for the good of the company. ... CISOs should be aware of a few key strategies for improving collaboration with their CFO counterparts. The first is reverse mentoring. Because CFOs and CISOs come from differing perspectives and lead domains rife with terminology and details that can be quite foreign to the other, reverse mentoring can be important for building a bridge between the two. In such a relationship, the CISO can offer insights into cybersecurity, while simultaneously learning to communicate in the CFO’s financial language. This mutual learning creates a more aligned approach to organizational risk. Second, CISOs must also develop their commercial perspective.


Establishing a Software-Based, High-Availability Failover Strategy for Disaster Mitigation and Recovery

No one should be surprised that cloud services occasionally go offline. If you think of the cloud as “someone else’s computer,” then you recognize there are servers and software behind it all. Someone else is doing their best to keep the lights on in the face of events like human error, natural disasters, and DDoS and other types of cyberattacks. Someone else is executing their disaster response and recovery plan. While the cloud may well be someone else’s computer, when there is a cloud outage that affects your operations, it is your problem. You are at the mercy of someone else to restore services so you can get back online. It doesn’t have to be that way. Cloud-dependent organizations can adopt strategies that minimize the risk that someone else’s outage will knock them offline. One such strategy is to take advantage of hybrid or multi-cloud architecture to achieve operational resiliency and high availability through service redundancy, using SANless clustering. Traditionally, a storage area network (SAN) provides the shared storage used to configure clustered nodes on-premises, in the cloud, and at a disaster recovery site. It’s a proven approach, but because it is hardware dependent, it is costly in terms of dollars and computing resources, and comes with additional management demands.
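At its core, a software-based failover strategy boils down to health-checking redundant endpoints in priority order and promoting the first healthy one. Here is a minimal, hypothetical sketch of that logic — the endpoint names are invented, and the health check is injected as a function so the failover decision can be exercised without real infrastructure; a production version would probe the actual service endpoints.

```python
# Minimal sketch of software-based failover across redundant endpoints
# (e.g., two clouds, or cloud plus on-prem). The health check is injected
# so the selection logic is testable; real checks would probe the service.
def pick_active(endpoints, is_healthy):
    """Return the first healthy endpoint in priority order, or None."""
    for ep in endpoints:
        if is_healthy(ep):
            return ep
    return None

endpoints = ["primary.cloud-a.example", "standby.cloud-b.example"]
down = {"primary.cloud-a.example"}          # simulate a cloud-A outage
active = pick_active(endpoints, lambda ep: ep not in down)
print(active)  # failover selects the cloud-B standby
```

SANless clustering adds the missing piece this sketch glosses over: replicating local storage between the nodes so the standby has current data when it is promoted.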


Trusted Apps Sneak a Bug Into the UEFI Boot Process

UEFI is a kind of sacred space — a bridge between firmware and operating system, allowing a machine to boot up in the first place. Any malware that invades this space will earn a dogged persistence through reboots, by reserving its own spot in the startup process. Security programs have a harder time detecting malware at such a low level of the system. Even more importantly, by loading first, UEFI malware will simply have a head start over those security checks that it aims to avoid. Malware authors take advantage of this order of operations by designing UEFI bootkits that can hook into security protocols, and undermine critical security mechanisms like UEFI Secure Boot or HVCI, Windows' technology for blocking unsigned code in the kernel. To ensure that none of this can happen, the UEFI Boot Manager verifies every boot application binary against two lists: "db," which includes all signed and trusted programs, and "dbx," including all forbidden programs. But when a vulnerable binary is signed by Microsoft, the matter is moot. Microsoft maintains a list of requirements for signing UEFI binaries, but the process is a bit obscure, Smolár says. "I don't know if it involves only running through this list of requirements, or if there are some other activities involved, like manual binary reviews where they look for not necessarily malicious, but insecure behavior," he says.
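The db/dbx check described above can be illustrated with a small sketch: a binary boots only if its hash appears in the trusted list and not in the revocation list. The binaries and hashes here are stand-ins, not real UEFI signature database contents — and note that the article's point survives the check: a signed-but-vulnerable binary in db passes just like a safe one.

```python
# Sketch of the allow/deny check the UEFI Boot Manager performs:
# a binary must match "db" (signed and trusted) and must not match
# "dbx" (forbidden). Hashes here are illustrative stand-ins.
import hashlib

db  = {hashlib.sha256(b"trusted-bootloader").hexdigest()}
dbx = {hashlib.sha256(b"revoked-bootloader").hexdigest()}

def may_boot(binary: bytes) -> bool:
    digest = hashlib.sha256(binary).hexdigest()
    return digest in db and digest not in dbx

print(may_boot(b"trusted-bootloader"))  # True
print(may_boot(b"revoked-bootloader"))  # False: revoked via dbx
print(may_boot(b"unknown-bootloader"))  # False: not signed/trusted
```

This is why dbx updates matter: revocation is the only lever once a vulnerable binary already carries a valid signature.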


How CISOs Can Build a Disaster Recovery Skillset

In a world of third-party risk, human error, and motivated threat actors, even the best-prepared CISOs cannot always shield their enterprises from all cybersecurity incidents. When disaster strikes, how can they put their skills to work? “It is an opportunity for the CISO to step in and lead,” says Erwin. “That's the most critical thing a CISO is going to do in those incidents, and if the CISO isn't capable of doing that or doesn't show up and shape the response, well, that's an indication of a problem.” CISOs, naturally, want to guide their enterprises through a cybersecurity incident. But disaster recovery skills also apply to their own careers. “I don't see a world where CISOs don't get some blame when an incident happens,” says Young. There is plenty of concern over personal liability in this role. CISOs must consider the possibility of being replaced in the wake of an incident and potentially being held personally responsible. “Do you have parachute packages like CEOs do in their corporate agreements for employability when they're hired?” Young asks. “I also see this big push of not only … CISOs on the D&O insurance, but they're also starting to acquire private liability insurance for themselves directly.”


Site Reliability Engineering Teams Face Rising Challenges

While AI adoption continues to grow, it hasn't reduced operational burdens as expected. Performance issues are now considered as critical as complete outages. Organizations are also grappling with balancing release velocity against reliability requirements. ... Daoudi suspects that there are a series of contributing factors that have led to the unexpected rise in toil levels. The first is AI systems maintenance: AI systems themselves require significant maintenance, including updating models and managing GPU clusters. AI systems also often need manual supervision due to subtle and hard-to-predict errors, which can increase the operational load. Additionally, the free time created by expediting valuable activities through AI may end up being filled with toilsome tasks, he said. "This trend could impact the future of SRE practices by necessitating a more nuanced approach to AI integration, focusing on balancing automation with the need for human oversight and continuous improvement," Daoudi said. Beyond AI, Daoudi also suspects that organizations are incorrectly evaluating toolchain investments. In his view, despite all the investments in inward-focused application performance management (APM) tools, there are still too many incidents, and the report shows a sentiment for insufficient observability instrumentation.


The Hidden Cost of Open Source Waste

Open source inefficiencies impact organizations in ways that go well beyond technical concerns. First, they drain productivity. Developers spend as much as 35% of their time untangling dependency issues or managing vulnerabilities — time that could be far better spent building new products, paying down technical debt, or introducing automation to drive cost efficiencies. ... Outdated dependencies compound the challenge. According to the report, 80% of application dependencies remain un-upgraded for over a year. While not all of these components introduce critical vulnerabilities, failing to address them increases the risk of undetected security gaps and adds unnecessary complexity to the software supply chain. This lack of timely updates leaves development teams with mounting technical debt and a higher likelihood of encountering issues that could have been avoided. The rapid pace of software evolution adds another layer of difficulty. Dependencies can become outdated in weeks, creating a moving target that’s hard to manage without automation and actionable insights. Teams often play catch-up, deepening inefficiencies and increasing the time spent on reactive maintenance. Automation helps bridge this gap by scanning for risks and prioritizing high-impact fixes, ensuring teams focus on the areas that matter most.


The Virtualization Era: Opportunities, Challenges, and the Role of Hypervisors

Choosing the most appropriate hypervisor requires thoughtful consideration of an organization’s immediate needs and long-term goals. Scalability is a crucial factor, as the selected solution must address current workloads and seamlessly adapt to future demands. A hypervisor that integrates smoothly with an organization’s existing IT infrastructure reduces the risks of operational disruptions and ensures a cost-effective transition. Equally important is the financial aspect, where businesses must look beyond the initial licensing fees to account for potential hidden costs, such as staff training, ongoing support, and any necessary adjustments to workflows. The quality of support the vendor provides, coupled with the strength of the user community, can significantly influence the overall experience, offering critical assistance during implementation and beyond. For many businesses, partnering with Managed Service Providers (MSPs) brings an added layer of expertise, ensuring that the chosen solution delivers maximum value while minimizing risk. The ongoing evolution and transformation of the virtualization market presents both challenges and opportunities. As the foundation for IT efficiency and flexibility, hypervisors remain central to these changes.
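The "look beyond licensing fees" advice can be made concrete with a back-of-the-envelope total-cost comparison. All figures below are hypothetical, purely to show the arithmetic: the option with the cheaper license is not the cheaper solution once training, support, and workflow changes are counted.

```python
# Back-of-the-envelope sketch of comparing hypervisor options on total
# cost rather than licensing alone. All cost figures are hypothetical.
def total_cost(costs: dict) -> int:
    return sum(costs.values())

option_a = {"licensing": 50_000, "training": 12_000,
            "support": 8_000, "workflow_changes": 15_000}
option_b = {"licensing": 65_000, "training": 3_000,
            "support": 5_000, "workflow_changes": 4_000}

# Option A wins on license price; option B wins on total cost.
print(total_cost(option_a), total_cost(option_b))  # 85000 77000
```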


DORA’s Deadline Looms: Navigating the EU’s Mandate for Threat Led Penetration Testing

It’s hard to defend yourself if you have no idea what you’re up against, and history and countless news stories are evidence that trying to defend against every manner of digital threat is a fool’s errand. As such, the first step to approaching DORA compliance is profiling not only the threat actors that target the financial services sector, but specifically which actors, and by what Tactics, Techniques, and Procedures (TTPs), you are likely to be attacked. But before you can determine how an actor may view and approach you, you need to know who you are. So the first profile that must be built is of your own business: not just financial services, but what sector and aspect, what region, and finally what the specific risk profile is, based on the critical assets in organizational, and even partner, infrastructures. The second profile begins with the current population of known actors that target the financial services industry. It then narrows to the actors known to be aligned with the specific targeting profile. From there, leveraging industry-standard models such as the MITRE ATT&CK framework, a graph is created of each actor or group’s understood goals and TTPs, including their traditional and preferred methods of access and exploitation, as well as their capabilities for evasion, persistence, and command and control.
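The two-profile narrowing step can be sketched as a simple filter: match your own business profile against each actor's known targeting, and collect the ATT&CK technique IDs of the actors that remain. The actor names, regions, and technique mappings below are invented for illustration (the technique IDs follow real ATT&CK numbering, but the attributions are not real intelligence).

```python
# Sketch of narrowing known financial-sector threat actors to those whose
# targeting matches your own profile, then collecting their TTPs as
# ATT&CK technique IDs. Actors and mappings are illustrative, not real.
actors = {
    "GroupA": {"regions": {"EU"},   "subsectors": {"banking"},
               "ttps": {"T1566", "T1078"}},   # phishing, valid accounts
    "GroupB": {"regions": {"APAC"}, "subsectors": {"insurance"},
               "ttps": {"T1190"}},            # exploit public-facing app
}

def likely_ttps(profile, actors):
    """Return TTPs of actors whose targeting matches the given profile."""
    relevant = {}
    for name, a in actors.items():
        if profile["region"] in a["regions"] and \
           profile["subsector"] in a["subsectors"]:
            relevant[name] = a["ttps"]
    return relevant

me = {"region": "EU", "subsector": "banking"}
print(likely_ttps(me, actors))  # only GroupA's TTPs apply to this profile
```

The resulting technique list is what feeds the threat-led penetration test: it defines which adversary behaviors the test should emulate.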


With AGI looming, CIOs stay the course on AI partnerships

“The immediate path for CIOs is to leverage gen AI for augmentation rather than replacement — creating tools that help human teams make smarter, faster decisions,” Nardecchia says. “There are very promising results with causal AI and AI agents that give an autonomous-like capability and most solutions still have a human in the loop.” Matthew Gunkel, CIO of IT Solutions at the University of California at Riverside, agrees that IT organizations should keep moving forward regardless of the growing delta between AI technology milestones and actual AI implementations. ... “The rapid advancements in AI technology, including projections for AGI and ACI, present a paradox: While the technology races ahead, enterprise adoption remains in its infancy. This divergence creates both challenges and opportunities for CIOs, employees, and AI vendors,” Priest says. “Rather than speculating on when AGI/ACI will materialize, CIOs would be best served to focus on what preparation is required to be ready for it and to maximize the value from it.” Sid Nag, vice president at Gartner, agrees that CIOs should train their attention on laying the foundation for AI and addressing important matters such as privacy, ethics, legal issues, and copyright issues, rather than focus on AGI advances.



Quote for the day:

"When you practice leadership, the evidence of the quality of your leadership is known from the type of leaders that emerge out of your leadership" -- Sujit Lalwani

Daily Tech Digest - April 12, 2024

Architecture is about tradeoffs. It is about spending money on one thing over another. It is about decisions. So when you tell me to develop productivity I think that is a great measure. But I also start wondering about quality. About satisfaction. Is that productivity a measure of one person? Every person? What toolset did they use? The same goes for content generation. An AI image is neat at first, but do we get tired of them? How do I measure the value of human-created work? Is there profit in that? Order management, electricity use, all of these measures are valuable. So when you hear about an AI business case… do you have a business case? Are the benefits REAL? ... Everything comes with pros/cons and we need a system in place to handle this change rate. This is true of all major human endeavors. Think of child workers during industrialization. Or the horrible cost to humanity of the intensity of urbanization and how it has endangered our planet. Only now are we coming to grips with all of that structural complexity. And even that is going to require decades more commitment. Technology and, specifically, AI is no different.


The Pitfalls of Periodic Penetration Testing & What to Do Instead

While periodic penetration testing can provide a snapshot of your organization’s security posture, it often fails to account for the dynamic nature of cyber threats. Organizations must continuously test their security measures to identify and neutralize emerging threats in real time. Organizations can leverage various approaches and tools to implement continuous cybersecurity testing, such as the Atomic Red Team by Red Canary, an open-source library of tests mapped to the MITRE ATT&CK framework that security teams can use to simulate adversarial activity and validate their defenses. These tools can help prioritize and mitigate potential cyber-attacks by automating security testing and providing valuable insights into adversary tactics and techniques. Endpoint security testing and firewall testing are excellent starting points for implementing continuous cybersecurity testing. By simulating phishing emails, running PowerShell commands at endpoints, and monitoring VPN logins at the firewall level, organizations can proactively identify potential vulnerabilities and mitigate them before cyber attackers can exploit them.
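The shape of a continuous-testing loop is simple: run a suite of attack simulations keyed to ATT&CK technique IDs, and surface the techniques your defenses failed to block or detect. The sketch below is hypothetical — the technique IDs follow real ATT&CK numbering, but the pass/fail lambdas stand in for actual simulations (which, with Atomic Red Team, would be the library's scripted tests):

```python
# Sketch of a continuous-testing loop in the spirit of Atomic Red Team:
# each entry simulates one ATT&CK technique and reports whether defenses
# held. The checks here are stand-ins for real attack simulations.
def run_suite(tests):
    """Run each simulation; return technique IDs whose defenses failed."""
    return [tid for tid, simulate in tests.items() if not simulate()]

tests = {
    "T1566.001": lambda: True,   # phishing attachment: blocked  -> pass
    "T1059.001": lambda: False,  # PowerShell execution: missed  -> fail
}
print(run_suite(tests))  # ['T1059.001'] needs attention
```

Scheduling such a suite daily, rather than annually, is the difference between continuous testing and a periodic snapshot.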


Generative AI Sucks: Meta’s Chief AI Scientist Calls For A Shift To Objective-Driven AI

Unlike current AI, which excels in narrow domains without grasping causality, objective-driven AI would be capable of causal reasoning and understanding the relationships between actions and outcomes. This shift would allow AI to plan and adapt strategies in real time, grounded in a nuanced comprehension of the physical and social world. Objective-driven AI is not just an incremental improvement but a leap toward machines that can truly collaborate with humans, offering insights, generating solutions, and understanding the broader impact of their actions. This vision represents a significant shift towards creating AI that can navigate the complexity of the real world with intelligence and purpose. ... Despite these challenges, LeCun is optimistic about the future, firmly believing that AI will eventually surpass human intelligence across all domains. This conviction is not grounded in wishful thinking but in a clear-eyed assessment of technological progress and the potential for groundbreaking scientific discoveries. However, LeCun also emphasizes that this evolution will not happen overnight or without a radical rethinking of our current approaches to AI development.


Strategies to cultivate collaboration between NetOps and SecOps

Collaborative culture starts at the top. The leaders of these teams need to collaborate and communicate consistently. They cannot have a turf war over each team’s roles and must understand each team’s responsibilities. Whether it’s shadowing a member of the other team for a day or taking opportunities to get to know other teams outside of work, establishing a collaborative culture is an important long-term investment for mutual success. ... AI and automation will blur the lines between these two teams, as projects focused on these elements are ones that can be tackled together. For example, having your vulnerability management tool automatically open tickets for other IT teams can create a feeling that the security team is dumping vulnerabilities over the wall.  ... The SecOps team tends to secure the budget, as they take in risks to the company: if a project is done, how does it reduce risk, and if it is not done, what risk does the company retain? Automation tools use network traffic (packet data) to create workflows, and AI tools feed this data into large language models. Both teams can utilize these LLMs to solve network and security issues.
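One way to automate ticket creation without "dumping vulnerabilities over the wall" is to filter and group findings before any tickets are raised: only severities above an agreed threshold, batched per owning team. This sketch is hypothetical — the finding fields, team names, and threshold are invented, and a real pipeline would call the ticketing system's API with each group:

```python
# Sketch of opening tickets from a vulnerability scanner without dumping
# raw findings on other teams: keep only severities above an agreed
# threshold and group by owning team. Data shapes are hypothetical.
from collections import defaultdict

findings = [
    {"host": "web01", "team": "netops", "severity": 9.8},
    {"host": "web02", "team": "netops", "severity": 4.1},
    {"host": "db01",  "team": "dba",    "severity": 7.5},
]

def tickets(findings, min_severity=7.0):
    """Return one ticket payload (list of hosts) per owning team."""
    grouped = defaultdict(list)
    for f in findings:
        if f["severity"] >= min_severity:
            grouped[f["team"]].append(f["host"])
    return dict(grouped)

print(tickets(findings))  # {'netops': ['web01'], 'dba': ['db01']}
```

Agreeing on the threshold and the grouping is itself a NetOps/SecOps collaboration exercise: the filter encodes a shared definition of what is worth a ticket.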


Down with Detection Obsession: Proactive Security in 2024

Now, as boards of directors and C-suites are expected to be more security savvy, they are asking important risk questions of their CISOs: Given all this spending on finding our problems, are we secure? Are we better off than we were a year ago or two years ago, or three years ago? And few security executives can answer those questions with comfort, because historically they were not focused on addressing risk, they were focused on discovering the risk. As time goes on and the security leader’s role becomes more business-centric, the benefits of taking a more proactive approach to security will continue to grow and shine. Consider, for example, the role of vulnerability management in reducing risk, achieving regulatory compliance, and saving costs. By actively seeking and addressing vulnerabilities, organizations can significantly reduce their overall attack surface, minimizing their chances of security breaches, data leaks and more. Many industries, like health care and financial services, have strict regulations governing the protection of sensitive data.


Agile development can unlock the power of generative AI - here's how

"The beauty of Agile is you see the fruits of your work quicker. You get feedback. And that's true with innovation generally -- the faster you can speed up cycle times, the better." Hakan Yaren, CIO at APL Logistics, told ZDNET that another benefit of Agile is that it's well-suited to the modern digital environment. Analyst firm Gartner suggested that 80% of technology products and services this year will be built by people who are not technology professionals. Yaren said Agile -- with its focus on joined-up thinking and cross-business approaches -- is a good fit for the decentralized nature of modern IT. "With AI and cloud, the barriers to entry are becoming lower and people in the business are making IT decisions," he said. "Agile is the right methodology to deal with many of these processes because of the speed of change." However, Yaren has a warning for IT professionals: The complexities you face could increase as more line-of-business employees test emerging technologies. "Trying to connect these solutions, and making sure they're secure, reliable, and you can connect the dots across them, is becoming even more challenging," he said.


The benefits of leveraging hybrid cloud automation

To optimise hybrid cloud architecture, most experts endorse automation, given its flexibility, simplicity and scalability. They believe automation is necessary to draw some of the benefits of the cloud back into the on-premises systems and the hybrid architecture. Automation can ensure a more seamless way for end-users to requisition an organisation’s services, regardless of its location. As more applications move into hybrid and multi-cloud environments, companies can explore several ways to automate manual processes taking place in the cloud. Crucial cloud automation aspects cover deployment, provisioning, compliance, configuration management, scaling, and more. Hybrid cloud automation examples include establishing a network in the cloud and configuring cloud servers. Cloud automation can also be used for managing server capacity, spinning up new environments and resources, configuring software and systems, rolling out software configurations whenever required, taking systems online and offline as needed to balance the load, scaling across data centres, and moving into a public cloud environment when handling front-end web services or high workloads that are on- or off-premises.
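One of the automation tasks listed above — taking systems online and offline as needed to balance the load — reduces to a capacity calculation. A minimal, hypothetical sketch, where the load figures, per-server capacity, and headroom factor are all illustrative:

```python
# Minimal sketch of one hybrid-cloud automation decision: how many
# servers to keep online for the current load. Numbers are illustrative.
import math

def desired_capacity(current_load, per_server_capacity, headroom=1.2):
    """Servers needed to carry the current load with some headroom."""
    return max(1, math.ceil(current_load * headroom / per_server_capacity))

print(desired_capacity(current_load=900, per_server_capacity=250))  # 5
print(desired_capacity(current_load=100, per_server_capacity=250))  # 1
```

In a real automation pipeline this calculation would drive the provisioning step — spinning instances up in the public cloud when the result exceeds on-premises capacity, and down again when load falls.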


Why strategists should embrace imperfection

We’re seeing paralysis as people wait for some kind of equilibrium or stasis to reemerge. Or they get nervous and leap before they look, whether it’s an acquisition or some other move. We wanted to lay out a different path that involves confidently stepping into risk by using a set of six mindsets that we put under the broad heading of imperfectionism. Imperfectionism sounds like a bad thing, but what we mean is accepting the ambiguity of not having perfect knowledge before making strategic moves. ... The kind of uncertainty that we face today really is twofold. One is the type we see in the newspaper, which is economic uncertainty, external shocks like the war in Ukraine. But there’s a much more fundamental kind of uncertainty we face now, which is very rapid technological change. Artificial intelligence, automation, programmable biology, and other disruptions are blurring industry boundaries and what it means to be a competitor in a particular industry. We’re also seeing the rise of supercompetitors like Apple, Amazon, and Google, which can operate across many industry spaces. 


What the American Privacy Rights Act Could Mean for Data Privacy

For companies that collect and monetize consumer data, the APRA could mean making changes to the way they do business. The APRA sets out requirements for issues like data minimization, transparency, consumer choice and rights, data protection, and executive responsibility. “It basically means that now they’re going to be able to collect less data: good for consumers and not so good if you're a company that needs all that data,” Antonio Sanchez, principal cybersecurity evangelist at Fortra, a cybersecurity and automation software company, tells InformationWeek. The draft legislation drills down to data privacy at an operational level. For example, it requires covered entities to appoint a privacy or data security officer or officers. “There is a real sense that a significant part of managing a modern privacy program is not found in the rules themselves but in the operation that gives life to those rules,” says Hughes. If the APRA goes into effect, covered entities will have 180 days to comply with its requirements. Non-compliance after that timeline could be met with enforcement action. 


Data Stewardship Best Practices

Business leaders must understand what makes data stewards successful in order to find the ideal candidates for the role. Johnson outlined some of the characteristics best suited for stewards. Coming from both business and IT: Many times, data stewards do best when they have a background in both technology and line-of-business department work. Johnson referred to them as “purple people” – having skills and experience spanning these two different job positions. Data stewards should be multiskilled, as well as “bilingual” and “bicultural” ... Acting as bridges: Data stewards should be able to translate both simple and complex information and communicate it in written or oral form. Johnson recommended that they also have a good sense of objectivity, distinguishing fact from fiction, and be able to envision what challenges and issues a company might face in the future. Excited by data: Thinking globally and participating in an influence culture, data stewards should get immersed in the ideas surrounding good Data Governance and better data handling. “When you’re talking to somebody, and they get really excited about data and their eyes light up, and they’re all energized and stuff, it’s a good sign – they might be fit for a steward role,” Johnson said.



Quote for the day:

"I find that the harder I work, the more luck I seem to have." -- Thomas Jefferson

Daily Tech Digest - March 23, 2023

10 cloud mistakes that can sink your business

It’s a common misconception that cloud migration always leads to immediate cost savings. “In reality, cloud migration is expensive, and not having a full and complete picture of all costs can sink a business,” warns Aref Matin, CTO at publishing firm John Wiley & Sons. Cloud migration often does lead to cost savings, but careful, detailed planning is essential. Still, as the cloud migration progresses, hidden costs will inevitably appear and multiply. “You must ensure at the start of the project that you have a full, holistic cloud budget,” Matin advises. Cloud costs appear in various forms. Sometimes they’re in plain sight, such as the cost of walking away from an existing data facility. Yet many expenses aren’t so obvious. ... A major challenge facing many larger enterprises is leveraging data spread across disparate systems. “Ensuring that data is accessible and secure across multiple environments, on-premises as well as on applications running in the cloud, is an increasing headache,” says Darlene Williams, CIO of software development firm Rocket Software.


Developed countries lag emerging markets in cybersecurity readiness

The drastic difference in cybersecurity preparedness between developed and developing nations is likely because organizations in emerging markets started adopting digital technology more recently compared to their peers in developed markets. “That means many of these companies do not have legacy systems holding them back, making it relatively easier to deploy and integrate security solutions across their entire IT infrastructure,” the report said, adding that technology debt — the estimated cost or assumed impact of updating systems — continues to be a major driver of the readiness gap. The Cisco Cybersecurity Readiness Index categorizes companies in four stages of readiness — beginner, formative, progressive, and mature. ... Identity management was recognized as the most critical area of concern. Close to three in five respondents, or 58% of organizations, were either in the formative or beginner category for identity management. However, 95% were at least at some stage of deployment with an appropriate ID management application, the report said.


Observability will transform cloud security

Is this different than what you’re doing today for cloud security? Cloud security observability may not change the types or the amount of data you’re monitoring. Observability is about making better sense of that data. It’s much the same with cloud operations observability, which is more common. The monitoring data from the systems under management is mostly the same. What’s changed are the insights that can now be derived from that data, including detecting patterns and predicting future issues based on these patterns, even warning of problems that could emerge a year out. ... Cloud security observability looks at a combination of dozens of data streams for a hundred endpoints and finds patterns that could indicate an attack is likely to occur in the far or near future. If this seems like we are removing humans from the process of making calls based on observed, raw, and quickly calculated data, you’re right. We can respond to tactical security issues, such as a specific server under attack, with alerts indicating that the attacking IP address should be blocked.
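The pattern-detection idea can be illustrated with the simplest possible detector: flag a metric sample that deviates sharply from its recent moving average. This is a toy sketch — the metric stream, window size, and threshold factor are invented, and production observability tooling uses far richer models — but it shows the shift from raw monitoring data to a derived insight:

```python
# Toy sketch of observability-style pattern detection: flag a metric
# sample that jumps well above its recent moving average. The stream,
# window, and threshold are illustrative, not a production detector.
def anomalies(samples, window=3, factor=2.0):
    flagged = []
    for i in range(window, len(samples)):
        baseline = sum(samples[i - window:i]) / window
        if samples[i] > baseline * factor:
            flagged.append(i)
    return flagged

requests_per_sec = [100, 104, 98, 101, 99, 420, 102]  # spike at index 5
print(anomalies(requests_per_sec))  # [5]
```

An alert fired from index 5 — rather than a human eyeballing the raw stream — is the "removing humans from the process" the article describes, in miniature.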


Operational Resilience: More than Disaster Recovery

Disaster recovery is fairly narrow in its definition and typically viewed in a small timeframe. Operational resilience is much broader, including aspects like the sort of governance you’ve put in place; how you manage operational risk management; your business continuity plans; and cyber, information, and third-party supplier risk management. In other words, disaster recovery plans are chiefly concerned with recovery. Operational resilience looks at the bigger picture: your entire ecosystem and what can be done to keep your business operational during disruptive events. ... Part of the issue is that cyber is still seen as special. The discussion always seems to conclude with the assumption that the security team or IT department is managing a particular risk, so no one else needs to worry about it. There is a need to demystify cybersecurity. It’s only with the proper business understanding and risk ownership that you can put proper resilience mechanisms in place.


Nvidia builds quantum-classical computing system with Israel’s Quantum Machines

The DGX Quantum deploys Nvidia’s Grace Hopper superchip and its technology platform for hybrid quantum-classical computers, coupling graphics processing units (GPUs) and quantum processing units (QPUs) in one system. It is supported by Quantum Machines' flagship OPX universal quantum control system designed to meet the demanding requirements of quantum control protocols, including precision, timing, complexity, and ultra-low latency, according to the Israeli startup. The combination allows “researchers to build extraordinarily powerful applications that combine quantum computing with state-of-the-art classical computing, enabling calibration, control, quantum error correction and hybrid algorithms,” Nvidia said in a statement. Tech giants like Google, Microsoft, IBM, and Intel are all racing to make quantum computing more accessible and build additional systems, while countries like China, the US, Germany, India, and Japan are also pouring millions into developing their own quantum abilities.


Leveraging Data Governance to Manage Diversity, Equity, and Inclusion (DEI) Data Risk

In organizations with a healthy data culture, the counterpart to compliance is data democratization. Democratization is the ability to make data accessible to the right people at the right time in compliance with all relevant legal, regulatory, and contractual obligations. Leaders delegate responsibility to stewards for driving data culture by democratizing data so that high-quality data is available to the enterprise in a compliant manner. Such democratized data enables frontline action by placing data into the hands of people who are solving business problems. Stewards democratize data by eliminating silos and moving past the inertia that develops around sensitive data sources. An essential aspect of democratization, therefore, is compliance. Stewards will not be able to democratize data without a clear ability to assess and manage risk associated with sensitive data. That said, it is critical that DEI advocates limit democratization of DEI data, especially at the outset of their project or program. 


The Future of Data Science Lies in Automation

Much data science work is done through machine learning (ML). Proper employment of ML can ease the predictive work that is most often the end goal of data science projects, at least in the business world. AutoML has been making the rounds as the next step in data science. Beyond getting all the data ready for modeling, part of machine learning is picking the correct algorithm and fine-tuning its (hyper)parameters. After data accuracy and veracity, the algorithm and its parameters have the greatest influence on predictive power. Although in many cases there is no perfect solution, there’s plenty of wiggle room for optimization. Additionally, there’s always some theoretically near-optimal solution that can be arrived at mostly through calculation and decision making. Yet arriving at these theoretical optimizations is exceedingly difficult. In most cases, the decisions will be heuristic, and any errors will be weeded out through experimentation. Even with extensive industry experience and professionalism, there is just too much room for error.
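The hyperparameter search described above can be sketched in a few lines. The example below runs a toy grid search over the regularization strength of a one-dimensional ridge model; the data, the candidate values, and the function names are invented for illustration, and real AutoML systems explore vastly larger spaces of algorithms and settings.

```python
def ridge_fit_1d(xs, ys, lam):
    # Closed-form ridge solution for y ≈ w*x: w = Σxy / (Σx² + λ)
    sxy = sum(x * y for x, y in zip(xs, ys))
    sxx = sum(x * x for x in xs)
    return sxy / (sxx + lam)

def validation_error(w, xs, ys):
    # Mean squared error of the fitted slope on held-out data
    return sum((w * x - y) ** 2 for x, y in zip(xs, ys)) / len(xs)

def grid_search(train, valid, lambdas):
    # Try each candidate hyperparameter; keep the one with the
    # lowest validation error
    best_lam, best_err = None, float("inf")
    for lam in lambdas:
        w = ridge_fit_1d(*train, lam)
        err = validation_error(w, *valid)
        if err < best_err:
            best_lam, best_err = lam, err
    return best_lam, best_err
```

The point is only that choosing the best setting is an automatable loop over candidates; AutoML generalizes exactly this kernel to whole pipelines of algorithms and parameters.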


What NetOps Teams Should Know Before Starting Automation Journeys

Like all people, NetOps professionals enjoy the results of a job well done. So, while the vision of their automation journey may be big, it’s important to start with a small, short-term project that can be completed quickly. There are a couple of benefits to this approach: quick automation wins give NetOps teams confidence for future projects, and such projects generate data and feedback that teams can convert into learnings and insights for the next one. This approach can also be applied to bigger, more complex automation projects. Instead of taking on the entire scale of the project at once, NetOps teams can break it down into smaller components. ... The advantages of this approach are the same as with the quick-win scenario: there is a better likelihood of success, and more immediate feedback and data to guide NetOps teams through the entire process. Finally, as talented as most NetOps teams are, they are not likely to have all of the automation expertise in-house at any given time.


Reducing the Cognitive Load Associated with Observability

Data points need to be filtered and transformed in order to generate the proper signals. Nobody wants to be staring at a dashboard or tailing logs 24/7, so we rely on alerting systems. When an alert goes off, it is intended for human intervention, which means transforming the raw signal into an actionable event with contextual data: criticality of the alert, environments, descriptions, notes, links, etc. It must carry enough information to direct attention to the problem, but not so much that the responder drowns in noise. Above all else, a page alert should require a human response. What else could justify interrupting an engineer from their flow if the alert is not actionable? When an alert triggers, analysis begins. While we eagerly wait for anomaly detection and automated analysis to fully remove the human factor from this equation, we can use a few tricks to help our brains quickly identify what’s wrong. ... Thresholds are required for alert signals to trigger. When it comes to visualization, the people who investigate and detect anomalies need to consider these thresholds too. Is this value too low, or unexpectedly high?
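As a minimal sketch of the "raw signal to actionable event" transformation, the function below attaches context and a criticality level only when a threshold is crossed. The field names, and the rule that a value above twice the threshold deserves a page, are assumptions made for illustration.

```python
def evaluate_alert(metric_name, value, threshold, context):
    # Below the threshold there is nothing actionable, so no one is interrupted
    if value <= threshold:
        return None
    # Enrich the raw signal with the context a responder needs
    return {
        "metric": metric_name,
        "value": value,
        "threshold": threshold,
        # Page a human only for severe breaches; otherwise open a ticket
        "criticality": "page" if value > 2 * threshold else "ticket",
        **context,  # environment, description, runbook links, notes, ...
    }
```

A breach such as `evaluate_alert("p99_latency_ms", 900, 400, {"environment": "prod"})` produces a page-level event carrying its environment, while values under the threshold produce nothing at all, keeping engineers in their flow.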


The Urgent Need for AI in GRC and Security Operations: Are You Ready to Face the Future?

Another area where AI tools are transforming the IT industry is security operations. Businesses face an ever-increasing number of cyberthreats, and it can be challenging to stay ahead of these threats. AI tools can help by automating many security operations, such as threat detection and incident response. They can also help with risk assessment by analyzing large amounts of data and identifying potential vulnerabilities. The benefits of AI tools in the IT industry are clear. By automating processes and improving decision-making, businesses can save time and money while reducing the risk of errors. AI tools can also help businesses to be more agile and responsive to changes in the market. However, the use of AI tools in the IT industry also presents some challenges. One of the key challenges is the need for specialized technical expertise. While AI tools can be user-friendly, businesses still need to have specialized expertise to use the tools effectively.
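One simple flavor of automated threat detection is statistical anomaly flagging. The sketch below marks points that sit far from the mean of a series; it is a deliberately crude stand-in for the machine-learning models real security platforms employ, useful only to show the shape of the idea.

```python
def zscore_anomalies(values, threshold=3.0):
    # Flag indices whose values lie more than `threshold` standard
    # deviations from the mean of the series
    n = len(values)
    mean = sum(values) / n
    var = sum((v - mean) ** 2 for v in values) / n
    std = var ** 0.5 or 1.0  # guard against a constant series
    return [i for i, v in enumerate(values) if abs(v - mean) / std > threshold]
```

Fed a series of, say, login counts per hour, the function returns the positions of sudden spikes worth a closer look; production systems would add seasonality handling, richer features, and learned baselines.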



Quote for the day:

"People seldom improve when they have no other model but themselves." -- Oliver Goldsmith

Daily Tech Digest - July 27, 2021

How AI Algorithms Are Changing Trading Forever

The Aite Group, in its report "Hedge Fund Survey, 2020: Algorithmic Trading," argues that the main reason for the growing popularity of algorithms in trading is the attempt to reduce the influence of the human factor in a highly volatile market. The economic fallout from COVID-19 saw a record-breaking drop in the American, European, and Chinese stock markets. Only a few months later, measures to stimulate the economy were able to stop the fall and reverse the downtrend. Thus we get the first benefit of algorithmic trading: risk reduction in a market with high volatility. The second global advantage of algorithmic trading lies in the ability to analyze the potential impact of a trade on the market. This can be especially useful for hedge funds and institutional investors who handle large sums of money with a visible effect on price movements. The third fundamental advantage of trading algorithms is protection from emotions. Traders and investors, like all people, experience fear, greed, the sting of missed profits, and other emotions. These emotions have a negative impact on performance and results.
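A moving-average crossover is perhaps the simplest example of the emotion-free, rule-based trading the article describes. The sketch below is illustrative only: the window sizes are arbitrary, and it ignores transaction costs, slippage, and risk controls.

```python
def sma(prices, window):
    # Simple moving average over the most recent `window` prices
    return sum(prices[-window:]) / window

def crossover_signal(prices, short=3, long=5):
    # Rule-based signal: act only on the data, never on feelings
    if len(prices) < long:
        return "hold"  # not enough history yet
    if sma(prices, short) > sma(prices, long):
        return "buy"   # short-term trend above long-term trend
    if sma(prices, short) < sma(prices, long):
        return "sell"  # short-term trend below long-term trend
    return "hold"
```

Because the rule fires purely on price history, it cannot be swayed by fear or greed, which is exactly the third advantage the article lists.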


How to prevent corporate credentials ending up on the dark web

Employees are the weakest link in any organization’s security posture. A Tessian report found that 43% of US and UK employees have made mistakes that resulted in cybersecurity repercussions for their organizations. Phishing scams, including emails that try to trick employees into sharing corporate login details, are particularly common. Educating employees on cyber threats and how to spot them is crucial to mitigating attacks. However, for training to be effective, it needs to consist of more than just repetitive lectures. In the report mentioned above, 43% of respondents said a legitimate-looking email was the reason they fell for a phishing scam, while 41% of employees said they were fooled because the email looked like it came from higher up. Live-fire security drills can help employees familiarize themselves with real-world phishing attacks and other password hacks. Safety awareness training should also teach workers the importance of good practices like using a virtual private network (VPN) when working from home and making social media accounts private.


IT leadership: 4 ways to find opportunities for improvement

Technology leaders should regularly use their own technology to better identify pain points and opportunities for improvements. That means that I should be teaching and using the same systems that faculty does to understand their experience through their lens. I should be meeting regularly with them and generating a Letterman-style Top 10 list of the things I hate most about my technology experience. This is something to do with the students, too. What do they hate most about the technology at the university? And how can we partner with them to address these issues over the next 12 months? Several years ago, for example, we reexamined our application process. If a prospective student wanted to submit an application, we required them to generate a unique username and password. If the one they chose was already taken, they needed to continue creating alternate versions until they eventually landed upon one that was available. If someone began the application process and logged off to complete it later, then forgot their username and password, they’d have to start all over again. It was absurd.


Data Management In The Age Of AI

AI is increasingly converging the traditional high-performance computing and high-performance data analytics pipelines, resulting in multi-workload convergence. Data analytics, training and inference are now being run on the same accelerated computing platform. Increasingly, the accelerated compute layer isn’t limited to GPUs—it now involves FPGAs, graph processors and specialized accelerators. Use cases are moving from computer vision to multi-modal and conversational AI, and recommendation engines are using deep learning while low-latency inference is used for personalization on LinkedIn, translation on Google and video on YouTube. Convolutional neural networks (CNN) are being used for annotation and labeling to transfer learning. And learning is moving to federated learning and active learning, while deep neural networks (DNN) are becoming even more complex with billions of parameters. The result of these transitions is different stages within the AI data pipelines, each with distinct storage and I/O requirements.


SASE: Building a Migration Strategy

Gartner's analysts say that "work from anywhere" and cloud-based computing have accelerated cloud-delivered SASE offerings to enable anywhere, anytime secure access from any device. Security and risk management leaders should build a migration plan from the legacy perimeter and hardware-based offerings to a SASE model. One hindrance to SASE adoption, some security experts tell me, is that organizations lack visibility into sensitive data and awareness of threats. Too many enterprises have separate security and networking teams that don't share information and lack an all-encompassing security strategy, they say. "While the vendors are touting SASE as the end-all solution, the key to success would depend upon how well we define the SASE operating model, particularly when there are so many vendors coming up with SASE-based solutions," says Bengaluru-based Sridhar Sidhu, senior vice president and head of the information security services group at Wells Fargo. Yask Sharma, CISO of Indian Oil Corp., says that as data centers move to the cloud, companies need to use SASE to enhance security while controlling costs.


How to Bridge the Gap between Netops and Secops

If you were designing the managerial structure for a software development firm from scratch today, it’s very unlikely that you would separate NetOps and SecOps in the first place. Seen from the perspective of 2021, many of the monitoring and visibility tools that both teams seek and use seem inherently similar. Unfortunately, however, the historical development of many firms has not been that simple. Teams and remits are not designed from the ground up with rationality in mind – instead they emerge from a complex series of interactions and ever-changing priorities. This means that different teams often end up with their own priorities, and can come to believe that they are more important than those of other parts of your organization. This is seen very clearly in the distinction between SecOps and NetOps teams in many firms. At the executive level, your network exists in order to facilitate connections – between systems and applications but above all between staff members. Yet for many NetOps teams, the network can come to be seen as an end in itself.


The future of data science and risk management

“Enterprise data is growing nearly exponentially, and it is also increasing in complexity in terms of data types,” said Morgan. “We have gone way past the time when humans could sift through this amount of data in order to see large-scale trends and derive actionable insights. The platforms and best practices of data science and data analytics incorporate technologies which automate the analytics workflows to a large extent, making dataset size and complexity much easier to tackle with far less effort than in years past. “The second value-add is to leverage machine learning, and ultimately artificial intelligence, to go beyond historical and near-real-time trend analysis and ‘look into the future’, so to speak. Predictive analysis can unveil new customer needs for products and services and then forecast consumer reactions to resultant offers. Equally, predictive analytics can help uncover latent anomalies that lead to much better predictions about fraud detection and potentially risky behaviour. “Nothing can foretell the future with 100% certainty, but the ability of modern data science to provide scary-smart predictive analysis goes well beyond what an army of humans could do manually.”


DevOps Observability from Code to Cloud

DevOps has transformed itself in the last few years, completely changing from what we used to see as siloed tools connected together to highly integrated, single-pane-of-glass platforms. Collaboration systems like JIRA, Slack, and Microsoft Teams are connected to your observability tools such as Datadog, Dynatrace, Splunk, and Elastic. Finally, IT Service management tools like PagerDuty are also connected in. Tying these high-in-class tools together on one platform, such as the JFrog Platform, yields high value to the enterprises looking for observability workflow. The security folks also need better visibility into an enterprise’s systems, to look for vulnerabilities. A lot of this information is available in Artifactory and JFrog Xray, but how do we leverage this information in other partner systems like JIRA and Datadog? It all starts with JFrog Xray’s security impact analysis, where we can generate the alert to Slack and robust security logs to Datadog to be analyzed by your Site Reliability Engineer. A PagerDuty incident that’s also generated from Xray can then be used to create a JIRA issue quickly.
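One way to picture this kind of cross-tool integration is a single scan finding fanned out into tool-specific payloads. Every field name below is invented for illustration and does not match any vendor's actual API.

```python
def fan_out_alert(vulnerability, severity):
    # Turn one scan finding into per-tool payloads (all field names
    # are illustrative, not any vendor's real schema)
    event = f"{severity.upper()}: {vulnerability}"
    urgency = "high" if severity == "critical" else "low"
    return {
        "chat": {"text": event},                             # chat notification, Slack-style
        "logs": {"message": event, "level": severity},       # log event, Datadog-style
        "incident": {"summary": event, "urgency": urgency},  # incident, PagerDuty-style
    }
```

In a real pipeline each payload would then be posted to the corresponding tool's webhook or REST endpoint; the value of a single-pane-of-glass platform is that this fan-out happens once, consistently, instead of being re-implemented per team.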


6 Global Megatrends That Are Impacting Banking’s Future

The line between digital and physical has blurred, with consumers who once preferred brick-and-mortar engagements now researching, shopping and buying using digital channels more than ever. This trend is expected to increase across all industries. While organizations have enabled improved digital engagement over the past several months, there are still major pain points, mostly with speed, simplicity and cross-channel integration during the ‘first mile’ of establishing a relationship. The retail industry already understands that consumers are becoming increasingly impatient, wanting the convenience and transparency of eCommerce and the service and humanization of physical stores. In banking, consumers are diversifying their financial relationships, moving to fintech and big tech providers that can open relationships in an instant and personalize experiences. According to Brett King, founder of Moven and author of the upcoming book, ‘The Rise of Technosocialism’, “The ability to acquire new customers at ‘digital scale’ will impact market share and challenge existing budgets for branches. ..."


Understanding Contextual AI In The Modern Business World

Contextual AI can be divided into three pillars that help make businesses become more visible to the people they want to reach. In the same sense, when a business is looking for a partner, it has to be sure that a prospect can offer the right services to fulfill its goals. Contextual AI aims to deliver that. The technology allows a brand to enhance its understanding of consumer interests. It is easy to make assumptions about consumer interests in different sectors, but difficult to prove them. ... In previous years, contextual AI was seen as an enhancing technology, but not an essential one. Now, the recognition of contextual AI as more than simply enhancing is growing. Businesses are constantly looking for more cost-effective solutions to their problems, and contextual AI offers one solution to fit that bracket. If you look at a similar alternative, such as behavioral advertising, it is heavily reliant on data — and lots of it. The huge amounts of data required to make this a success means that businesses have to implement a successful collection, analysis and then reporting solution in order to leverage it effectively. This can be a costly process if a business does not have large economies of scale.



Quote for the day:

"A true dreamer is one who knows how to navigate in the dark" -- John Paul Warren

Daily Tech Digest - July 21, 2020

Why You Should Consider A Career In Data Science

Another thing that makes data science so popular is that it accepts people of all sorts, regardless of their background and domain. People in literally any industry can move into data science and still do amazing work in their industry with the help of data science. People from the banking and finance industry, food and health sector, arts, climate science, engineering, and physics can all couple their domain knowledge and expertise with data science and make ground-breaking progress. You do not necessarily need to have a BSc or MSc in computer science or engineering in order to start a career in data science but rather couple data science with whatever career you currently have, find a problem you can solve with the combination of both and do something. Data science in combination with Artificial Intelligence, Machine Learning, Robotics, and the Internet of Things has the power to literally automate anything in order to make lives easy. Automation of tasks can also bring huge progress to companies since work can now be done faster. Also, when work is done by humans, there is a natural tendency to be inconsistent and make human-related errors. Automating tasks handles these problems and gives us better results in a shorter time.


Top CEOs Agree That Culture Is The Key Before, During And After Crisis

“The One Carnegie approach, starting with myself and our executive team, really means coming together around common values. It doesn’t matter the country, language, race or religion, we wanted people to come together around our strong values. Just like Dale Carnegie would say, ‘Try honestly to see things from the other person’s point of view.’ “The essence is that we want to value each person as an individual and as part of an organization. People value transparency and openness. When the pandemic hit, our One Carnegie foundation helped us tremendously. From a cultural standpoint, there was transparency, and we communicated very clearly what was happening and how we were responding to it. We shifted our entire global in-person training business to live online trainer delivery, and this could not have happened without a culture of working together and moving fast. Our strong culture created alignment in all 86 countries. People felt safe asking questions and working together. The results we are seeing are extraordinary.” CEO Gary Terrinoni of Brooklyn Hospital, founded in 1839 and cited as the number one safety net hospital in America, shared, “We had to move people around to be able to support the issues that we had with COVID-19, and people just stepped up. 


What to look for when modernizing the Data Lake

Whether a company is born into the digital world or has a more traditional business, it must invest and excel in tech advances such as mobility, cloud computing, and, most importantly, advanced analytics and data science. Doing so will equip it with the right tools to innovate its existing operations and deliver a seamless experience to customers. However, it isn’t that easy to achieve this goal. To realize the benefits of advances in technologies, organizations must leverage all their data. This requires modernizing their data architectures. In other words, organizations must unlock and migrate their data from multiple, heterogeneous systems, including legacy mainframe systems and enterprise applications, and quickly process and refine it for consumption in AI and ML initiatives. Modern, cloud-based data lakes provide enterprises the agility and flexibility they need to store and process massive volumes of diverse data. There are several things to keep in mind when architecting a modern data lake. Data architectures are constantly evolving. Companies are adding new sources of data, offloading data to new target systems for processing and refining, and adding new analytical tools and solutions to their technology infrastructure.


If software architects' soft skills fail, so does the business

The history of software development contains rich lessons, both good and bad. We assume that current capabilities (like elastic scale) just appeared one day because of some clever developer, but those ideas were often born of hard lessons. Pets.com represents an early example of hard lessons learned. Pets.com appeared in the early days of the internet, hoping to become the Amazon.com of pet supplies. Fortunately, they had a brilliant marketing department, which invented a compelling mascot: a sock puppet with a microphone that said irreverent things. The mascot became a superstar, appearing in public at parades and national sporting events. Unfortunately, management at Pets.com apparently spent all the money on the mascot, not on infrastructure. Once orders started pouring in, they weren't prepared. The website was slow, transactions were lost, deliveries delayed, and so on … pretty much the worst-case scenario. So bad, in fact, that the business closed shortly after its disastrous Christmas rush, selling the only remaining valuable asset (the mascot) to a competitor. What the company needed was elastic scale: the ability to spin up more instances of resources, as needed. 


Successful innovation doesn’t have to be disruptive—it’s often small, incremental, and fast

The tension between breakthrough and incremental approaches can be found in most settings, not just online businesses. For example, medicine has had a long tradition of searching for interventions that have transformative outcomes on patients. But perhaps, as surgeon and researcher Atul Gawande argues, success “is not about episodic, momentary victories, though they do play a role. It is about the longer view of incremental steps that produce sustained progress.” That, Gawande continues, “is what making a difference really looks like. In fact, it is what making a difference looks like in a range of endeavors.” One endeavor, manufacturing, has known and practiced this approach for decades. In Toyota’s renowned production system, for example, real-time experiments by its factory workers to eradicate problems are an integral part of its continuous improvement system. Even there, people are expected to form clearly articulated, testable hypotheses and explain their logic for each attempted improvement. Of course, breakthrough and disruptive innovation will continue to play an important role in driving growth, as there are limits to incremental approaches.


Determining and overcoming blockchain fatigue

“Blockchain fatigue sets in mainly due to the fact that not many people fully understand what this technology offers and so have difficulties trying to implement it into their business or process. This lack of understanding can lead to frustration and consequently a dwindling enthusiasm for the technology. “While still in its infancy, blockchain is perhaps stretching the patience of those who were initially overly optimistic about the technology. The continued lack of full-scale implementation of blockchain is creating this sense of fatigue as there are still no end-to-end fully deployable solutions available for enterprises. “Most of the work still focuses on small pilot projects and this, coupled with technology immaturity, lack of standards and a general misunderstanding of how blockchain technology works and what it offers, is also contributing to the market feeling fatigued with blockchain.” While usage of blockchain within various sectors continues to grow and develop beyond its best known function within cryptocurrencies, a recent study from Deloitte shows that a rising number of senior executives and practitioners worldwide are seeing the technology as overhyped, with 55% stating this in 2020. With this in mind, what must organisations do to overcome blockchain fatigue and continue to keep faith?


How Quantum Mechanics will Change the Tech Industry

In a digital computer, the system requires bits to increase its processing power. Thus, in order to double the processing power, you would simply double the amount of bits — this is not at all similar in quantum computers. A quantum computer uses qubits, the basic unit of quantum information, to provide processing capabilities unmatched even by the world’s most powerful supercomputers. How? Superposed qubits can simultaneously tackle a number of potential outcomes (or states, to be more consistent with our previous segments). In comparison, a digital computer can only crunch through one calculation at a time. Furthermore, through entanglement, we are able to exponentially amplify the power of a quantum computer, particularly when comparing this to the efficiency of traditional bits in a digital machine. To visualise the scale, consider the sheer amount of processing power each qubit provides, and now double it. But there’s a catch — even the slightest vibrations and temperature changes, referred to by scientists as “noise”, can cause quantum properties to decay and eventually, disappear altogether. While you can’t observe this in real time, what you will experience is a computational error.
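The doubling the author asks us to visualise shows up directly in classical simulation: describing n qubits requires tracking 2**n complex amplitudes. A minimal sketch, building the equal superposition that a Hadamard gate on every qubit of |00...0> would produce:

```python
def uniform_superposition(n_qubits):
    # H applied to each qubit of |00...0> yields an equal superposition:
    # all 2**n basis states share amplitude 1/sqrt(2**n)
    dim = 2 ** n_qubits
    amplitude = dim ** -0.5
    return [amplitude] * dim
```

Each extra qubit doubles the length of the returned list while the squared amplitudes still sum to 1; this exponential blow-up in state to track is precisely why classical machines struggle to simulate large quantum systems, and why real qubits are so powerful.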


Remote work is the new normal. But the tech problems won't go away

Once the technical issues are overcome, there is much to be gained from an off-premise workforce. Employees themselves seem to draw a better work-life balance out of telecommuting; in fact, three-quarters of UK employees have reported not wanting to go back to the office full-time. Half of the business leaders surveyed by Riverbed named a better work-life balance as a bottom-line benefit for their employees as a result of remote working. An equal proportion of respondents also mentioned savings from office space, and 43% said that they expected flexible working to increase productivity. "In a year's time, I believe the biggest difference to everyday work will be that people will be much more available, without all of the complications and logistics that we have always known, and this will make them more efficient and productive," says Bombagi. Since the start of the crisis, he has noticed that he can fit in up to eight virtual customer meetings on a given day, where he could previously only do two, and only if they were both based in London.  His working day used to be planned around logistics: "If I'm going to be on the Tube, I know I can't make a call. If I'm driving somewhere, I can make a call, but I can't do a presentation. If I'm on a plane, apart from some email, I can't really do anything," says Bombagi.


Quantum Computing: Looking Ahead To Endless Possibilities

It’s a strange behavior of quantum mechanics whereby the more complex the calculation is, the more impressive the algorithm becomes. Sometimes the result of square root acceleration is trumped by completing calculations in a logarithm of the time — so exponentially faster. Essentially, unlike the computers we know and use, it’s not a simulation or manufactured programmatic function that’s doing the calculating — it’s the quantum world, which needs to be maintained at almost absolute zero temperature with no interruptions or interactions with its surroundings. We’re so far away from these realities in an applicatory sense, but the fact that we know they are there — and in a few special cases, they already exist — is enough of a reason to begin thinking. If we don’t acknowledge the potential and possibilities now, by the time it does become application-worthy, the AI contingent will have already missed the boat. The aforementioned "few special cases" so far include the likes of Microsoft, IBM and Intel, as well as Google. They are further ahead than anyone else has been in history to unlocking the scope of quantum computing. To be able to wade through vast swathes of data laden with millions and billions of constraints, all in the blink of an eye.


Bringing NetOps Up to Speed With DevOps

Fortunately for NetOps teams, myriad networking vendors today readily offer pre-built, certified solutions for DevOps platforms, making it easier to get started on a cloud-native journey by automating activities such as device onboarding and configuration changes. This way, network administrators can leverage existing vendor partnerships, in-house knowledge and technology that is already proven within the larger IT environment. Additionally, network engineers shouldn’t need—and won’t have the extra time—to become top-notch developers to take advantage of programmability during their cloud-native journey. Developing basic programming skills is advantageous, but network management systems that offer Python scripting, a consistent set of APIs and webhooks can perform the “heavy lifting” when it comes to enabling extensibility with third-party IT platforms. Today, this level of extensibility includes being able to integrate with third-party IT service management tools. A common use case that can realize significant time savings and greater network and application availability is to auto-trigger and assign an incident ticket when a performance SLA is breached. 
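The SLA-breach use case in the final sentence reduces to a small rule. In this sketch the field names, the assignee, and the 1.5x priority cutoff are all hypothetical rather than drawn from any specific ITSM product's API.

```python
def check_sla_and_ticket(metric, value, sla_limit, assignee):
    # Auto-trigger an incident ticket only when the SLA is actually breached
    if value <= sla_limit:
        return None
    # Escalate priority for breaches well beyond the agreed limit
    priority = "high" if value > 1.5 * sla_limit else "medium"
    return {
        "title": f"SLA breach: {metric} = {value} (limit {sla_limit})",
        "assignee": assignee,
        "priority": priority,
    }
```

In practice a network management system would evaluate such a rule continuously against live telemetry and post the resulting payload to the ticketing tool through its webhook or REST API, which is exactly the kind of "heavy lifting" the article says these platforms can take off engineers' hands.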



Quote for the day:

"The task of leadership is not to put greatness into humanity, but to elicit it, for the greatness is already there." -- John Buchan