Daily Tech Digest - April 28, 2022

MPLS, SDN, even SD-WAN can give you the network observability you need

The starting point in traffic management is to examine your router policies to see whether you’re picking routes correctly, but sometimes even controlling routing policies won’t get your flows going along the routes you want. If that’s the case, you have a traffic-management issue to address. The best tools to add traffic management capability are MPLS and SDN. MPLS lets you build routes by threading an explicit path through the routers. SDN eliminates the whole concept of adaptive routing and convergence by having a central controller maintain a global route map that it gives to each SDN switch, and that it updates in response to failures or congestion. If your network consists of a VPN service and a complicated LAN, SDN is likely the better option. If you actually have a complex router network, MPLS is likely the right choice. With either MPLS or SDN, you know where your flows are because you put them there. There’s also the option of virtual networking, if neither MPLS nor SDN seems to fit your needs. Almost all the major network vendors offer virtual networks that use a second routing layer, and by putting virtual-network routers at critical places you can create explicit routes for your traffic.


Build desktop and mobile UIs with Blazor Hybrid apps

There’s a lot to like about this approach to UIs. For one, it builds on what I consider to be the key lesson of the last decade on the web: We need to design our APIs first. That makes UI just another API client, using REST and JSON to communicate with microservices. We can then have many different UIs working against the same back end, all using the same calls and having the same impact on our service. It simplifies design and allows us to predictably scale application architectures. At the same time, a fixed set of APIs means that service owners can update and upgrade their code without affecting clients. That approach led to the development of concepts like the Jamstack, using JavaScript, APIs, and Markup to deliver dynamic static websites, simplifying web application design and publishing. Blazor Hybrid takes those concepts and brings them to your code while skipping the browser and embedding a rendering surface alongside the rest of your application. You can work offline where necessary, a model that becomes even more interesting when working with locked-down environments such as the Windows 11 SE educational platform.


Parallel streams in Java: Benchmarking and performance considerations

The Stream API brought a new programming paradigm to Java: a declarative way of processing data using streams—expressing what should be done to the values and not how it should be done. More importantly, the API allows you to harness the power of multicore architectures for the parallel processing of data. There are two kinds of streams. A sequential stream is one whose elements are processed sequentially (as in a for loop) when the stream pipeline is executed by a single thread. A parallel stream is split into multiple substreams that are processed in parallel by multiple instances of the stream pipeline being executed by multiple threads, and their intermediate results are combined to create the final result. A parallel stream can be created directly on a collection by invoking the Collection.parallelStream() method. The sequential or parallel mode of an existing stream can be modified by calling the BaseStream.sequential() and BaseStream.parallel() intermediate operations, respectively. A stream is executed sequentially or in parallel depending on the execution mode of the stream on which the terminal operation is initiated.
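The three ways of obtaining each mode described above can be sketched in a few lines of Java (the list contents are purely illustrative):

```java
import java.util.List;

public class StreamModes {
    public static void main(String[] args) {
        List<Integer> numbers = List.of(1, 2, 3, 4, 5, 6, 7, 8);

        // Sequential stream: the pipeline is executed by a single thread.
        int seqSum = numbers.stream().mapToInt(Integer::intValue).sum();

        // Parallel stream created directly on the collection.
        int parSum = numbers.parallelStream().mapToInt(Integer::intValue).sum();

        // Switching an existing stream's mode with the parallel()
        // intermediate operation; the mode in effect when the terminal
        // operation (sum) is initiated governs the whole pipeline.
        int switchedSum = numbers.stream()
                                 .parallel()
                                 .mapToInt(Integer::intValue)
                                 .sum();

        // All three print the same total, 36; only the scheduling differs.
        System.out.println(seqSum + " " + parSum + " " + switchedSum);
    }
}
```

All three pipelines produce the same result; the choice of mode affects only how the work is distributed across threads.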


Design-First Approach to API Development

Design-First begins with both technical and non-technical individuals from each of the functions involved participating in the process of writing a contract that defines the purpose and function of the API (or set of APIs). Obviously, this approach requires some time upfront spent on planning. This phase aims to ensure that when it comes time to start coding, developers are writing code that won't need to be scrapped and rewritten later down the line. This helps create iterative, useful APIs that, in turn, lead to a better, more scalable API program — and value to your business — as a whole. Regardless of which approach you choose, the most critical thing to think about is how to deliver positive experiences for stakeholders, including end-users, third-party or in-house developers, and even folks from the rest of the company who may have a role. I think of APIs like technology ambassadors — the digital face of a brand — as they form a network of internal and external connections. And as such, they should be designed and crafted with care, just like any other product or service that your company offers.

At Western Digital, we recognize the importance of doing our part to contain global temperature rise. So it was important to pledge and set our ambitious goal to help limit the increase to less than 1.5°C by 2030. While we’ve made significant improvements over the past few years, we have a lot of work to do to achieve our goal. It is particularly challenging to achieve the goal while the factory is going through expansion. So that’s why we rely on 4IR technologies to drive eco-efficiency. ... Our strategy hinges on three approaches: accountability, digital, and partnerships. First, it’s about setting bold climate commitments that demonstrate our accountability to making science-based progress. For more than three decades, we’ve been setting publicly facing environmental goals. And we continue to commit to bold goals, including the intention to source 100 percent of our global electricity needs from renewable sources by 2025 and to be carbon neutral in our global operations by 2030. Along with that, we’re harnessing digital and Industry 4.0 advanced-manufacturing technologies to reduce our carbon footprint and, to your earlier point, drive greater resilience.


7 leadership traits major enterprises look for in a CIO

A resourceful CIO is able to blend prior experience with multiple variables, such as accepted frameworks, methodologies, and cultural and political landscapes. “In essence, the new CIO, when effectively using resourcefulness, is in the best position to challenge the current paradigm of the enterprise and chart the path forward,” says Greg Bentham, vice president of cloud infrastructure services at business advisory firm Capgemini Americas. Joining a major enterprise and establishing trust within a new organization is perhaps the most challenging task a CIO will ever face. Many obstacles will inevitably surface and need to be resolved. While prior experience and frameworks can be applied, reality suggests that history never exactly repeats itself. Top enterprises expect that their new CIO will possess the knowledge and creativity to overcome even the most challenging barriers. The best way to become resourceful is through direct experience gathered throughout an IT career, particularly experiences that spurred organizational changes, Bentham says.


How to use data analytics to improve quality of life

In a perfect world, employees in labor-intensive roles will be re-trained to tackle more creative and complex problem-solving tasks. Less-experienced workers will be able to quickly skill up with AI-augmented on-the-job training. In some cases, AI-equipped cameras are already enhancing, rather than replacing, human labor. By monitoring assembly-line production, tracking worker steps and processing findings into actionable feedback, this data technology can deliver valuable movement-efficiency training to employees on the line – including how to safely and efficiently move and operate in spaces shared by humans and robots. Yet who’s footing the bill here? How do business owners benefit from the adoption (and, of course, investment in) data technology? First and foremost is the obvious and immediate benefit of reducing lost labor hours due to injuries and worker-compensation-related costs. But there is also the knock-on effect of promoting a healthier and (hopefully) happier workforce. The question then becomes how to gain the buy-in of labor. 


Building the right tech setup for a multi-office organisation

IT and facilities teams sometimes rely on strong third-party relationships to enable multi-location collaboration. This often means having a good relationship with a telecommunication service provider (or providers, depending on the internet services available in the various locations), complete with a service level agreement that specifies the exact network performance standards to be met. Likewise, it’s essential to have built trust between all the companies that deliver the organisation’s collaboration technology, whether hardware or software. It may also be that IT teams rely on local managed service providers to provide on-site support on their behalf. Collaboration, and the technology that enables it, has become a core tenet of the post-pandemic workplace – but it means different things to different organisations. Sometimes, it’s about internal communication using voice and videoconferencing, messaging, and webinars. Perhaps these integrate with an office productivity suite or customer relationship management software, enhancing productivity and communication with colleagues, clients, or prospects. Other times, it’s about implementing the best solutions for your office space.


How edge computing can bolster aviation sector innovation

Edge cloud networks can provide continuous high-bandwidth connectivity between aircraft and the internet. This enables data transmission even in mid-air, with edge computing providing a filter for the most relevant information – reducing overall bandwidth usage. Servers on the ground can then selectively pull data from the edge servers on the aircraft for more detailed, real-time analysis – helping to spot potential problems and advise immediate remedial actions. This high-bandwidth connectivity can send the information needed to allow airlines to predict component failures and other faults before they occur, and empower organisations to take the necessary steps to address them. Systems can generate automatic notifications from the plane to enable ground crews to prepare for repairs at the next landing point. Maintenance teams can more easily manage their parts and resources with access to detailed information. Edge computing also holds potential for enabling aviation operators to develop a mobility infrastructure that incorporates intelligent connected vehicles within a more extensive transportation network.


How AI can close gaps in cybersecurity tech stacks

There are five strategies cybersecurity vendors should rely on to help their enterprise customers close widening gaps in their security tech stacks. Based on conversations with endpoint security, IAM, PAM, patch management and remote browser isolation (RBI) providers and their partners, these strategies are beginning to dominate across the cybersecurity landscape. ... Enterprises need better tools to assess risks and vulnerabilities to identify and close gaps in tech stacks. As a result, there’s a growing interest in using Risk-Based Vulnerability Management (RBVM) that can scale across cloud, mobile, IoT and IIoT devices today. Endpoint Detection & Response (EDR) vendors are moving into RBVM with vulnerability assessment tools. Leading vendors include CODA Footprint, CyCognito, Recorded Future, Qualys and others. Ivanti’s acquisition of RiskSense delivered its first product this month, Ivanti Neurons for Risk-Based Vulnerability Management (RBVM). What’s noteworthy about Ivanti’s release is that it is the first RBVM system that relies on a state engine to measure, prioritize and control cybersecurity risks to protect enterprises against ransomware and advanced cyber threats.



Quote for the day:

"The art of communication is the language of leadership." -- James Humes

Daily Tech Digest - April 27, 2022

Think of search as the application platform, not just a feature

As a developer, the decisions you make today in how you implement search will either set you up to prosper, or block your future use cases and ability to capture this fast-evolving world of vector representation and multi-modal information retrieval. One severely blocking mindset is relying on SQL LIKE queries. This old relational database approach is a dead end for delivering search in your application platform. LIKE queries simply don’t match the capabilities or features built into Lucene or other modern search engines. They’re also detrimental to the performance of your operational workload, leading to the over-use of resources through greedy quantifiers. These are fossils—artifacts of SQL from decades ago, which is like a few dozen millennia in application development. Another common architectural pitfall is proprietary search engines that force you to replicate all of your application data to the search engine when you really only need the searchable fields.
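The difference in shape between a LIKE scan and what a search engine does can be sketched in miniature in Java (the documents below are invented for illustration, and this toy inverted index is nothing like Lucene's real one):

```java
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;
import java.util.Set;
import java.util.TreeSet;

public class TinyInvertedIndex {
    public static void main(String[] args) {
        List<String> docs = List.of(
                "search is the application platform",
                "relational like queries scan every row",
                "lucene builds an inverted index");

        // Equivalent of SQL:  WHERE body LIKE '%index%'
        // -- every document must be scanned and substring-matched,
        // which is why such queries cannot use an ordinary index.
        List<Integer> scanHits = new ArrayList<>();
        for (int i = 0; i < docs.size(); i++) {
            if (docs.get(i).contains("index")) {
                scanHits.add(i);
            }
        }

        // What a search engine does instead: build an inverted index
        // (term -> documents) once, then answer term queries with a
        // single map lookup instead of a full scan.
        Map<String, Set<Integer>> inverted = new HashMap<>();
        for (int i = 0; i < docs.size(); i++) {
            for (String term : docs.get(i).split("\\s+")) {
                inverted.computeIfAbsent(term, t -> new TreeSet<>()).add(i);
            }
        }
        Set<Integer> indexHits = inverted.getOrDefault("index", Set.of());

        System.out.println(scanHits + " " + indexHits);
    }
}
```

Both approaches find the same document here, but the scan's cost grows with total data volume while the index lookup's cost grows only with the number of matches, which is the core of the author's performance argument.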


What Is a Data Reliability Engineer, and Do You Really Need One?

It’s still early days for this developing field, but companies like DoorDash, Disney Streaming Services, and Equifax are already starting to hire data reliability engineers. The most important job for a data reliability engineer is to ensure that high-quality, trustworthy data is readily available across the organization. When broken data pipelines strike (because they will at one point or another), data reliability engineers should be the first to discover data quality issues. However, that’s not always the case: bad data is often first discovered downstream, in dashboards and reports, instead of in the pipeline – or even before. Since data is rarely ever in its ideal, perfectly reliable state, the data reliability engineer is more often tasked with putting the tooling (like data observability platforms and testing) and processes (like CI/CD) in place to ensure that when issues happen, they’re quickly resolved and their impact is conveyed to those who need to know. Much like site reliability engineers are a natural extension of the software engineering team, data reliability engineers are an extension of the data and analytics team.
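As a sketch of the kind of automated check such tooling runs (the Row schema and the thresholds here are invented purely for illustration):

```java
import java.util.List;

public class PipelineChecks {
    // Hypothetical record shape for one batch of pipeline rows.
    record Row(String userId, Double amount) {}

    public static void main(String[] args) {
        List<Row> batch = List.of(
                new Row("u1", 10.0),
                new Row("u2", null),   // e.g., a broken upstream join
                new Row(null, 5.0));   // e.g., a missing key

        // Simple quality assertions a reliability engineer might wire
        // into CI/CD so a bad batch is caught in the pipeline, not
        // discovered later in a dashboard.
        long nullKeys = batch.stream().filter(r -> r.userId() == null).count();
        long nullAmounts = batch.stream().filter(r -> r.amount() == null).count();
        boolean volumeOk = batch.size() >= 3; // compare against an expected floor

        System.out.printf("nullKeys=%d nullAmounts=%d volumeOk=%b%n",
                nullKeys, nullAmounts, volumeOk);
    }
}
```

In practice these checks live in a data observability platform rather than hand-rolled code, but the principle is the same: detect in the pipeline, alert, and convey the impact downstream.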


Mitigating Insider Security Threats in Healthcare

Some security experts say that risks involving insiders and cloud-based data are often misjudged by entities. "One of the biggest mistakes entities make when shifting to the cloud is to think that the cloud is a panacea for their security challenges and that security is now totally in the hands of the cloud service," says privacy and cybersecurity attorney Erik Weinick of the law firm Otterbourg PC. "Even entities that are fully cloud-based must be responsible for their own privacy and cybersecurity, and threat actors can just as readily lock users out of the cloud as they can from an office-based server if they are able to capitalize on vulnerabilities such as weak user passwords or system architecture that allows all users to have access to all of an entity's data, as opposed to just what that user needs to perform their specific job function," he says. Dave Bailey, vice president of security services at privacy and security consultancy CynergisTek, says that when entities assess threats to data within the cloud, it is incredibly important to develop and maintain solid security practices, including continuous monitoring.


Is cybersecurity talent shortage a myth?

It is a combination of things but yes, in part technology is to blame. Vendors have made the operation of the technologies they designed an afterthought. These technologies were never made to be operated efficiently. There is also a certain fixation on technologies that just don’t offer any value, yet we keep putting a lot of work towards them, like SIEMs. Unfortunately, many technologies are built upon legacy systems. This means that they carry those systems’ weaknesses and suboptimal features that were adapted from other intended purposes. For example, many people still manage alerts using cumbersome SIEMs that were originally intended to be log accumulators. The alternative is ‘first principles’ design, where the technology is developed with a particular purpose in mind. Some vendors assume that their operators are the elites of the IT world, with the highest qualifications, extensive experience, and deep knowledge of every piece of adjoining or integrating technology. Placing high barriers to entry on new technologies—time-consuming qualifications or poorly-delivered, expensive courses—contributes to the self-imposed talent shortage.


How Manufacturers Can Avoid Data Silos

The first and most important step you can take to break down silos is to develop policies for governing the data. Data governance helps to ensure that everyone in a factory understands how the data should be used, accessed, and shared. Having these policies in place will help prevent silos from forming in the first place. According to Gartner data, 87 percent of manufacturers have minimal business intelligence and analytics expertise. The research found these firms less likely to have a robust data governance strategy and more prone to data silos. Data governance efforts that improve synergy and maximize data effectiveness can help manufacturing companies reduce data silos. ... Another way to break down data silos is to cultivate a culture of collaboration. Encourage employees to share information and knowledge across departments. When everyone is working together, it will be easier to avoid duplication of effort and wasted time. To break down data silos, manufacturers should move to a culture that encourages collaboration and communication from the top down.


Top 7 metaverse tech strategy do's and don'ts

Like any other technology project, a metaverse project should support overall business strategy. Although the metaverse is generating a lot of buzz right now, it is only a tool, said Valentin Cogels, expert partner and head of EMEA product and experience innovation at Bain & Company. "I don't think that anyone should think in terms of metaverse strategy; they should think about a customer strategy and then think about what tools they should use," Cogels said. "If the metaverse is one tool they should consider, that's fine." Taking a business-goals-first approach also helps to refine the available choices, which leaders can then use to build out use cases. Serving the business goals and customers you already have is critical, said Edward Wagoner, CIO of digital at JLL Technologies, the property technology division of commercial real estate services company JLL Inc., headquartered in Chicago. "When you take that approach, it makes it a lot easier to think how [the products and services you deliver] would change if [you] could make it an immersive experience," he said.


Digital begins in the boardroom

Boards need to guard against the default of having a “technology expert” that everyone turns to whenever a digital-related issue comes onto the agenda. Rather than being a collection of individual experts, everyone on a board should have a good strategic understanding of all important areas of business – finance, sales and marketing, customer, supply chain, digital. The best boards are a group of generalists – each with certain specialisms – who can discuss issues widely and interactively, not a series of experts who take the floor in turn while everyone else listens passively. There is much that can be done to raise levels of digital awareness among executives and non-executives. Training courses, webinars, self-learning online – all these should be on the agenda. But one of the most effective ways is having experts, whether internal or external, come to board meetings to run insight sessions on key topics. For some specialist committees, such as the audit and/or risk committees, bringing in outside consultants – on cyber security, for example – is another important feature.


4 reasons diverse engineering teams drive innovation

Diverse teams can also help prevent embarrassing and troubling situations and outcomes. Many companies these days are keen to infuse their products and platforms with artificial intelligence. But as we’ve seen, AI can go terribly wrong if a diverse group of people doesn’t curate and label the training datasets. A diverse team of data scientists can recognize biased datasets and take steps to correct them before people are harmed. Bias is a challenge that applies to all technology. If a specific class of people – whether it’s white men, Asian women, LGBTQ+ people, or others – is solely responsible for developing a technology or a solution, they will likely build to their own experiences. But what if that technology is meant for a broader population? Certainly, people who have not been historically under-represented in technology are also important, but the intersection of perspectives is critical. A diverse group of developers will ensure you don’t miss critical elements. My team once developed a website for a client, for example, and we were pleased and proud of our work. But when a colleague with low vision tested it, we realized it was problematic.


Bringing Shadow IT Into the Light

IT teams are understaffed and overwhelmed after the sharp increase in support demands caused by the pandemic, says Rich Waldron, CEO and co-founder of Tray.io, a low-code automation company. “Research suggests the average IT team has a project backlog of 3-12 months, a significant challenge as IT also faces renewed demands for strategic projects such as digital transformation and improved information security,” Waldron says. There’s also the matter of employee retention during the Great Resignation hinging in part on the quality of the tech on the job. “Data shows that 42% of millennials are more likely to quit their jobs if the technology is sub-par,” says Uri Haramati, co-founder and CEO at Torii, a SaaS management provider. “Shadow IT also removes some burden from the IT department. Since employees often know what tools are best for their particular jobs, IT doesn’t have to devote as much time searching for and evaluating apps, or even purchasing them,” Haramati adds. In an age when speed, innovation and agility are essential, locking everything down instead just isn’t going to cut it. For better or worse, shadow IT is here to stay.


Log4j Attack Surface Remains Massive

"There are probably a lot of servers running these applications on internal networks and hence not visible publicly through Shodan," Perkal says. "We must assume that there are also proprietary applications as well as commercial products still running vulnerable versions of Log4j." Significantly, all the exposed open source components contained a substantial number of additional vulnerabilities that were unrelated to Log4j. On average, half of the vulnerabilities were disclosed prior to 2020 but were still present in the "latest" version of the open source components, he says. Rezilion's analysis showed that in many cases when open source components were patched, it took more than 100 days for the patched version to become available via platforms like Docker Hub. Nicolai Thorndahl, head of professional services at Logpoint, says flaw detection continues to be a challenge for many organizations because while Log4j is used for logging in many applications, software providers don't always disclose its presence in their release notes.



Quote for the day:

"Go as far as you can see; when you get there, you'll be able to see farther." -- J. P. Morgan

Daily Tech Digest - April 26, 2022

The emerging risks of open source

Many enterprises have sought to make their open source lives easier by buying into managed services. It’s a great short-term fix, but it doesn’t solve the long-term issue of sustainability. No, the cloud hyperscalers aren’t strip miners, nefariously preying on the code of unsuspecting developers. But too often some teams fail to plan to contribute back to the projects upon which they depend. I stress some, as this tends to not be a corporation-wide issue, no matter the vendor. I’ve detailed this previously. Regardless, the companies offering these managed services tend to not have any control over the projects’ road maps. That’s not great for enterprises that want to control risk. Google is a notable exception—it tends to contribute a lot to key projects. Nor can they necessarily contribute directly to projects. As Mugrage indicates, for companies like Netflix or Facebook (Meta) that open source big projects, these “open source releases are almost a matter of employer branding—a way to show off their engineering chops to potential employees,” which means “you’re likely to have very little sway over future developments.” 


How to model uncertainty with Dempster-Shafer’s theory?

One of the main advantages of this theory is that we can use it to generate a degree of belief by taking all the available evidence into account. This evidence can be obtained from different sources. The degree of belief is calculated by a mathematical function called the belief function. We can also think of this theory as a generalization of the Bayesian theory of subjective probability. Degrees of belief sometimes have the mathematical properties of probabilities and sometimes do not. The theory lets us answer questions that would otherwise be posed in terms of probability theory, and it rests on two fundamentals: degree of belief and plausibility. An example makes this concrete. Say a person presents with covid-19 symptoms and we hold a belief of 0.5 in the proposition that the person is suffering from covid-19. This means we have evidence that makes us think the proposition is true with a confidence of 0.5. At the same time, the evidence supports the contradictory proposition, that the person is not suffering from covid, with a confidence of only 0.2.
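The numbers in the covid example work out as follows: mass 0.5 supports covid, mass 0.2 supports not-covid, and the remaining 0.3 is uncommitted, so belief and plausibility bracket the uncertainty. A small Java sketch for this two-element frame (illustrative only, not a general Dempster-Shafer implementation):

```java
public class DempsterShafer {
    public static void main(String[] args) {
        // Basic mass assignment over the frame {covid, not-covid}.
        double mCovid = 0.5;      // evidence directly supporting covid
        double mNotCovid = 0.2;   // evidence directly contradicting covid
        double mUncommitted = 1.0 - mCovid - mNotCovid; // 0.3, no opinion

        // Belief: total mass of evidence entailing the proposition.
        double belief = mCovid;                 // Bel(covid) = 0.5
        // Plausibility: all mass not committed against the proposition.
        double plausibility = 1.0 - mNotCovid;  // Pl(covid) = 0.8

        System.out.printf("Bel=%.1f Pl=%.1f uncommitted=%.1f%n",
                belief, plausibility, mUncommitted);
    }
}
```

The true probability of covid is pinned only to the interval [0.5, 0.8]; the width of that interval, 0.3, is exactly the mass the evidence leaves uncommitted.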


The Other AI: Augmented Intelligence

With a clear view of the benefits augmented intelligence delivered by AR can provide, you may be excited to get started within your enterprise but unsure of where to begin. First, it's important to start by speaking to your field technicians and service agents to gauge their interest or any potential aversion to implementing the technology into their workspace. New technology can be intimidating to field service technicians who are used to completing tasks a certain way. Helping them to understand how the technology can enhance their jobs and make service experiences less challenging and more engaging will be key. Next, consider which devices are needed to implement the augmented intelligence platform. At a basic level, a smartphone or tablet is needed. Hands-free wearable glasses make it easier for technicians to accomplish tasks in the field and on the factory floor. Drone support goes even further with AR visual awareness and graphical guidance not previously available. Finally, you'll want to confirm the bandwidth and connectivity requirements of the augmented intelligence AR platform and associated devices to ensure your field service technicians are set up for success.


Writing Code Is One Thing, Learning to Be a Software Engineer Is Another

Software developers are always students of software development, and whenever you think you know what you are doing, it will punch you in the face. Good developers are humble, because software development crushes overconfidence with embarrassing mistakes. You cannot avoid mistakes, problems and disasters; therefore, you need the humility to acknowledge mistakes and a team to help you find and fix them. When you start as a developer, you focus on creating code to meet the requirements. I used to think being a developer was just writing code. But software development has many other aspects to it, from design, architecture and unit testing to DevOps, application lifecycle management (ALM), gathering requirements, and clarifying assumptions. There are many best practices, such as the SOLID principles, DRY (don't repeat yourself), and KISS, among others. These best practices and fundamental skills have long-term benefits, which makes them hard for junior developers to appreciate because there is no initial payoff. Well-named code, designed to be easily tested, isn't first-draft code. It does more than work. It's built to be easy to read, understand and change.
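As a concrete instance of one of those practices, here is a small, hypothetical DRY refactor in Java: the duplicated formula works, but only the single named method stays cheap to read, test, and change.

```java
public class DryExample {
    // Duplicated: the same 25% markup formula pasted into two places,
    // which invites the copies drifting apart when the rate changes.
    static double invoiceTotalDuplicated(double net) { return net + net * 0.25; }
    static double quoteTotalDuplicated(double net)   { return net + net * 0.25; }

    // DRY: one authoritative implementation, reused everywhere.
    static double withMarkup(double net) { return net * 1.25; }
    static double invoiceTotal(double net) { return withMarkup(net); }
    static double quoteTotal(double net)   { return withMarkup(net); }

    public static void main(String[] args) {
        System.out.println(invoiceTotal(100.0)); // prints 125.0
    }
}
```

Both versions compute the same totals today; the difference only shows its value later, when the markup rule changes and there is exactly one place to edit, which is why the benefit is invisible to a developer focused on making the current requirement pass.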


AI Set to Disrupt Traditional Data Management Practices

“They often don’t have the skill sets, or their organizations don’t put in place processes and tools and practices to really manage data management for AI specifically,” says Sallam. “So data-centric AI has the potential to disrupt what has been traditional data management practices as well as prevalent model-centric data science by making sure that AI-specific considerations like data bias, labeling, drift, are all in place in a consistent manner to improve the quality of models on an ongoing basis.” Are tools under development to address this need, or are organizations investing in solutions for it? Sallam says that some of the other trends on the list will contribute to improving data management around AI. Specifically, to address this gap, leading organizations are disrupting data management for AI by building out data fabrics on active metadata and investing in things like AI governance, she said. This data-centric AI trend is one of several Gartner highlighted in its report for 2022 and grouped with a few others under the title of activating dynamism and diversity. 


Growing prospects for edge datacentres

Edge operations require user organisations and suppliers to think beyond infrastructure and architectural needs. New automation and orchestration challenges will arise, often across transactional boundaries and occurring between different companies and industries, rather than just different parts of the network. They must also think about ownership of the software and infrastructure stack and the likely path of service engagement – be that through a telecoms operator, hyperscale public cloud provider or others. Providers of edge operational services also need to decide how they support multiple customers according to their individual needs. This will be especially necessary for applying operational-specific AI algorithms, and may result in multi-layered partner offerings. All this will require organisations to think more carefully about how they extend their datacentre operations to enable greater levels of edge processing, work with cloud providers or hook into another provider’s edge datacentre network. The biggest drivers for edge datacentres are coming from industry sectors where edge operations are already well established. 


The Metaverse needs to keep an eye on privacy to avoid Meta’s mistakes

Metaverse avatars are a conglomeration of all issues relating to privacy in the digital realm. As a user’s gateway to all Metaverse interactions, they can also offer platforms a lot of personal data to collect, especially if their tech stack involves biometric data, like tracking users’ facial features and expressions for the avatar’s own emotes. The risk of someone hacking biometric data is far scarier than hacking shopping preferences. Biometrics are often used as an extra security precaution, such as when you authorize payment on your phone using your fingerprint. Imagine someone stealing your fingerprints and draining your card with a bunch of transfers. Such breaches are not unheard of: In 2019, hackers got their hands on the biometric data of 28 million people. It’s scary to think about how traditional digital marketing might look in the Metaverse. Have you ever shopped for shoes online and then suddenly noticed your Facebook is filled with ads for similar footwear? That’s a result of advertisers using both cookies and your IP address to personalize your ads. 


The Most In-Demand Cyber Skill for 2022

Just when everybody hoped that the security environment could not be more challenging, recent world events have created a further substantial uptick in cyber-attacks. This has also increased the sense that maybe we should all care more about the security of everything we ever purchased and placed in the cloud. Not so much buyer’s remorse as a penitent desire to upgrade the security of anything in the cloud that might be more critical to the organization once the current threat landscape is taken into consideration. Zero trust, extended detection and response (XDR), SASE (secure access service edge) – almost all the hottest topics are about how to take the security standards that were (once-upon-a-time) applied as standard to traditional networks and *seamlessly* implement them across cloud environments. The number one position for cloud computing makes sense. It reflects the growing concern about cloud security and the gradual evolution of the requirement to ensure that each organization has a consistent security architecture that extends over and includes any important cloud solutions and services in use.


How SaaS Models Changed Content Creation

Content creation used to be a difficult, arduous and manual process. Creative visions were consistently hampered by workflow and technological limitations. The dilemmas of our past were based on technical feasibility. Now, we have been unshackled from those restrictions. It’s no longer a question of what’s possible to do, but rather what you want to do and which path to take to get there. SaaS leveled the playing field. To understand what is possible now and what is yet to come, it’s important to distinguish between two areas within the umbrella use of SaaS. First, we have true software as a service, which is software that runs on the internet and is accessed in the cloud. Google Workspace is an example of this, allowing users to create spreadsheets, documents and presentations that are stored on Google’s servers. (Disclosure: My company has a partnership with Google.) The software runs as a service for you to connect to from any device and edit your documents anywhere. It's persistent regardless of the computer you’re on, and documents can be edited by multiple users, even simultaneously.


Data parallelism vs. model parallelism – How do they differ in distributed training?

In data parallelism, the dataset is split into ‘N’ parts, where ‘N’ is the number of GPUs. These parts are then assigned to parallel computational machines. Gradients are then calculated for each copy of the model, after which all the models exchange the gradients, and the values of these gradients are averaged. For every GPU or node, the same parameters are used for the forward propagation. A small batch of data is sent to every node, and the gradient is computed normally and sent back to the main node. Distributed training is practised using two strategies, called synchronous and asynchronous. ... In model parallelism, the model is partitioned into ‘N’ parts, just as in data parallelism, where ‘N’ is the number of GPUs. Each part is then placed on an individual GPU. The batch is then processed sequentially across the GPUs, starting with GPU#0, then GPU#1, and continuing until GPU#N. This is forward propagation. Backward propagation runs in reverse, beginning with GPU#N and ending at GPU#0. Model parallelism has some obvious benefits. It can be used to train a model that does not fit into a single GPU.
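The synchronous data-parallel scheme described above can be sketched in a few lines of NumPy. This is a toy stand-in for real frameworks (such as PyTorch's DistributedDataParallel); the gradient-averaging step plays the role of the all-reduce exchange, and the model, data, and learning rate are made up for illustration:

```python
import numpy as np

def data_parallel_step(weights, data_shards, grad_fn, lr=0.2):
    """One synchronous data-parallel update: each of N 'GPUs' computes a
    gradient on its own shard, the gradients are averaged (the all-reduce),
    and every replica applies the same update, keeping weights identical."""
    grads = [grad_fn(weights, shard) for shard in data_shards]  # per-GPU gradients
    avg_grad = np.mean(grads, axis=0)                           # exchange + average
    return weights - lr * avg_grad                              # same update everywhere

def grad_fn(w, shard):
    # Gradient of mean squared error for a toy linear model on one shard.
    x, y = shard
    return 2 * x.T @ (x @ w - y) / len(y)

rng = np.random.default_rng(0)
x = rng.normal(size=(8, 3))
w_true = np.array([1.0, -2.0, 0.5])
y = x @ w_true
shards = [(x[i::4], y[i::4]) for i in range(4)]  # split the batch across 4 "GPUs"

w = np.zeros(3)
for _ in range(2000):
    w = data_parallel_step(w, shards, grad_fn)
```

Because the shards are equal-sized, the averaged shard gradients equal the full-batch gradient, so every replica converges toward `w_true` in lockstep.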



Quote for the day:

"At the heart of great leadership is a curious mind, heart, and spirit." -- Chip Conley

Daily Tech Digest - April 25, 2022

How to avoid compliance leader burnout

Just as a CISO will be held responsible for a security breach, even if the incident was unforeseeable, a compliance leader is considered responsible for all aspects of compliance: getting the appropriate certifications and reports, making sure the company passes its audits, etc. But if traditional methods of compliance are used, the compliance leader has no actual oversight on whether those controls are running. For example, the compliance team may set up controls over user access, but if one control owner forgets to run their control, the resulting failure will likely be blamed on the compliance leader. ... Data-oriented compliance that automatically pulls data from primary sources can sift through a vast volume of data and give an early signal if it senses a problem that needs to be looked at by a security person or engineer. This makes it less likely that a compliance leader will be blindsided by a long-running failure to implement a control. When a control is built into processes that a department is already running, it’s less likely to be overlooked by that department—since the control is part of a process that’s operationally important to the company.
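The data-oriented approach described above can be sketched as a small script, assuming run timestamps have already been pulled from primary sources (all control names here are illustrative): it flags controls whose owners have not run them within the allowed window, instead of waiting for an audit to surface the gap.

```python
from datetime import datetime, timedelta

def overdue_controls(control_runs, max_age_days=30, now=None):
    """Flag controls whose most recent run is older than the allowed window
    (or that never ran), so the compliance leader gets an early signal."""
    now = now or datetime.now()
    overdue = []
    for control, last_run in control_runs.items():
        if last_run is None or now - last_run > timedelta(days=max_age_days):
            overdue.append(control)
    return overdue

now = datetime(2022, 4, 25)
runs = {
    "quarterly-access-review": datetime(2022, 1, 2),  # owner forgot to run it
    "offboarding-check": datetime(2022, 4, 20),       # recent, fine
    "mfa-enforcement-scan": None,                     # never run
}
flagged = overdue_controls(runs, max_age_days=30, now=now)
```

Here the long-forgotten access review and the never-run scan are surfaced weeks before an auditor would find them.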


Simplify Cloud Deployment Through Teamwork, Strategy

Liu suggests that when striving for simplification, IT organizations should recognize that simplification of architectures is complex and can be disruptive. That means it’s important to identify the most opportune time that works for the whole organization. “When simplifying, don’t just think about components like network switches or storage,” she says. “If you focus on moving or simplifying one component, your simplification can invite a lot more complexity. Think about simplifying whole infrastructure solutions. Align at the solution- or service-level first.” Stuhlmuller advises that enterprise cloud teams should educate themselves on how networking is done, not only in their primary cloud, but in all public clouds. This allows them to develop a multi-cloud network architecture that will keep them from having to re-architect when, inevitably, the day comes that the business requires support for a second or third public cloud provider. “Cloud teams supporting enterprise-scale businesses discover that building with basic constructs quickly increases the complexity and requires resource-intensive manual configuration,” he says.


Most Email Security Approaches Fail to Block Common Threats

Digging into where email defense breaks down, the firms found that, surprisingly, use of email client plug-ins for users to report suspicious messages continues to increase. Half of organizations are now using an automated email client plug-in for users to report suspicious email messages for analysis by trained security professionals, up from 37 percent in a 2019 survey. Security operations center analysts, email administrators, and an email security vendor or service provider are the groups most commonly handling these reports, although 78 percent of organizations notify two or more groups. Also, user training on email threats is now offered in most companies, the survey found: More than 99 percent of organizations offer training at least annually, and one in seven organizations offer email security training monthly or more frequently. “Training more frequently reduces a range of threat markers. Among organizations offering training every 90 days or more frequently, the likelihood of employees falling for a phishing, BEC or ransomware threat is lower than at organizations only training once or twice a year,” according to the report.


Why private edge networks are gaining popularity

For edge computing to gain large-scale adoption across enterprises, APIs need to provide an abstraction layer that alleviates the intensive work of having developers write code to communicate with each system in a tech stack. Abstraction layers save developers’ time and streamline new app development. Alef’s approach looks at how they can capitalize on stable APIs to shield developers from dealing with complex tech stacks in getting work done. Edge device processors are getting more intelligent. The rapid gains in chip processor architectures make it possible to complete data capture, analytics and aggregation at the endpoint before sending the result to cloud databases. In addition, endpoint devices’ growing intelligence makes it possible to offload more tasks, reducing network latency in the process. ... All businesses need real-time data to grow. Small gains in visibility and control across an enterprise can deliver large cost savings and revenue gains. That’s because real-time data is very good at helping to identify gaps in cost, customer, revenue and service processes.
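The abstraction-layer idea can be illustrated with a small facade (every class and method name here is hypothetical, not from any vendor's actual API): application code calls one uniform method, while vendor-specific drivers hide each stack's protocol details behind it.

```python
class EdgeDeviceAPI:
    """Uniform facade over heterogeneous edge stacks, so application code
    never talks to a vendor-specific protocol directly."""
    def __init__(self, driver):
        self.driver = driver  # vendor-specific adapter plugged in here

    def read_metric(self, name):
        return self.driver.fetch(name)

class VendorADriver:
    def fetch(self, name):
        # Pretend protocol call into vendor A's stack.
        return {"temp_c": 21.5}[name]

class VendorBDriver:
    def fetch(self, name):
        # Pretend protocol call into vendor B's stack.
        return {"temp_c": 22.0}[name]

# App code stays identical regardless of which stack sits underneath.
readings = [EdgeDeviceAPI(d).read_metric("temp_c")
            for d in (VendorADriver(), VendorBDriver())]
```

Adding support for a third vendor means writing one more driver, not changing any application code.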


Deep Science: AI simulates economies and predicts which startups receive funding

Applying AI to due diligence is nothing new. Correlation Ventures, EQT Ventures and Signalfire are among the firms currently using algorithms to inform their investments. Gartner predicts that 75% of VCs will use AI to make investment decisions by 2025, up from less than 5% today. But while some see the value in the technology, dangers lurk beneath the surface. In 2020, Harvard Business Review (HBR) found that an investment algorithm outperformed novice investors but exhibited biases, for example frequently selecting white and male entrepreneurs. HBR noted that this reflects the real world, highlighting AI’s tendency to amplify existing prejudices. In more encouraging news, scientists at MIT, alongside researchers at Cornell and Microsoft, claim to have developed a computer vision algorithm — STEGO — that can label objects in images down to the individual pixel. While this might not sound significant, it’s a vast improvement over the conventional method of “teaching” an algorithm to spot and classify objects in pictures and videos.


Stack Overflow Exec Shares Lessons from a Self-Taught Coder

As a self-taught developer, Chan describes life as an entry-level software engineer as “a really big surprise and shock,” especially given his past experiences in the world of programmer job interviews. He was baffled by his previous experiences interviewing with large companies, finding himself “failing miserably,” he told the podcast audience. Tech interviews, he said, were “where it’s like, ‘I don’t even know what a red-black tree is, so please don’t ask me more interview questions about that kind of thing!'” By contrast, he’d known of Stack Overflow for years, and considered it the home of “some of the best engineers that I could possibly think of.” ... Chan recalled learning what all new managers learn: while you may have been good at your old position, “once you become a manager, the skillset is completely different.” Or, in his case, “You’re no longer working with computers and with code anymore. You’re working with people, right?” There were more conversations, and listening to people — but also a shift in thought. “This is not about code so much anymore,” he said.


Founders’ Guide To Embedding Corporate Governance In Your Startup

Founders would do well to have some role models when it comes to governance and to read about the practices and philosophies those companies deploy. However, they may have to look beyond the startup universe for that, because good governance is usually a sustained phenomenon that only companies in business for decades can demonstrate. In my view, the Tata Group in general, and specifically under the stewardship of JRD Tata, has been the epitome of good governance. Some leading IT services companies like Infosys could also be studied. One does not have to look far and toward the West for such role models. Founders will also do well to remember that getting an up round (after passing through diligence) is not a validation that they are doing everything right. Many times, investments happen due to prevailing market sentiment and liquidity. This happens in spaces that are hot, where market tailwinds compel investors to close transactions faster. However, such times don’t last forever. Often, when a fastidious investor comes in to write a big cheque, such transgressions come to light.


Improving Your Estimation Skills by Playing a Planning Game

When we look at a large, complex task and estimate how long it will take us to complete, we mentally break down the large task into smaller tasks. We then construct a mental story of how we will complete each smaller task. We identify the sequential relationship between tasks, their interconnectedness, and their prerequisites. We then integrate them into a connected narrative of how we will complete the large task. All of these activities are good, and indeed essential for completing any large task. However, by constructing this mental story, we slip out of estimation mode and into planning mode. This means that we focus upon the how-to’s, rather than thinking back to past experiences of potential impediments and how they may extend the task duration. Planning is a bit like software development, whilst estimation is a bit like software testing. In development, we are trying to get something to work. So, if our initial approach is unsuccessful, we modify it or try something else. Once we have got it to work, we are generally satisfied and move on to solving the next problem.


How to be a smart contrarian in IT

Start with the end user or the most important stakeholders: Do they find the end results intriguing? Have you built a proof-of-concept solution that tests your hypotheses? Can they get some value from, and provide you with quality feedback on, a minimum viable product (MVP)? Don’t over-engineer a solution to a problem that nobody cares about. Let your customers lead you to what matters and do just enough engineering from there. You’ll still need to add standard enterprise features such as security, user experience, and scale, but the goal is to add them to a product your client wants and values. ... Before you try to solve a problem, find out if anyone on your team or at your company has already solved that problem or has experience with it. Explore wikis and forums to see if solutions have been documented privately or publicly. Too often, we fail to ask questions because we don’t want to appear uninformed or unintelligent. Keep in mind that most people enjoy being asked for advice and would welcome the opportunity to answer a question, especially early in the process when they can help you save time and effort.


Get ready for your evil twin

Accurately replicating the look and sound of a person in the metaverse is often referred to as creating a “digital twin.” Earlier this year, Jensen Huang, the CEO of NVIDIA, gave a keynote address using a cartoonish digital twin. He stated that the fidelity will rapidly advance in the coming years, as will the ability for AI engines to autonomously control your avatar so you can be in multiple places at once. Yes, digital twins are coming. Which is why we need to prepare for what I call “evil twins” – accurate virtual replicas of the look, sound, and mannerisms of you (or people you know and trust) that are used against you for fraudulent purposes. This form of identity theft will happen in the metaverse, as it’s a straightforward amalgamation of current technologies developed for deepfakes, voice emulation, digital twinning, and AI-driven avatars. And the swindlers may get quite elaborate. According to Bell, bad actors could lure you into a fake virtual bank, complete with a fraudulent teller that asks you for your information. Or fraudsters bent on corporate espionage could invite you into a fake meeting in a conference room that looks just like the virtual conference room you always use.



Quote for the day:

"The signs of outstanding leadership are found among the followers." -- Max DePree

Daily Tech Digest - April 24, 2022

Zero-Trust For All: A Practical Guide

In general, zero-trust initiatives have two goals in mind: reduce the attack surface and increase visibility. To demonstrate this, consider the (common) scenario of a ransomware gang buying initial access to a company’s cloud through an underground initial-access broker and then attempting to mount an attack. In terms of visibility, “zero trust should stop that attack, or make it so difficult that it will be spotted much earlier,” said Greg Young, vice president of cybersecurity at Trend Micro. “If companies know the postures of their identities, applications, cloud workloads, data sources and containers involved in the cloud, it should make it exceedingly hard for attackers. Knowing what is unpatched, what is an untrusted lateral movement, and continuously monitoring the posture of identities really limits the attack surface available to them.” And on the attack-surface front, Malik noted that if the gang used a zero-day or unpatched vulnerability to gain access, zero trust will box the attackers in. “First, at some point the attackers will cause a trusted user or process to begin misbehaving,” he explained. 


Web3 Security: Attack Types and Lessons Learned

Expert adversaries, often called Advanced Persistent Threats (APTs), are the boogeymen of security. Their motivations and capabilities vary widely, but they tend to be well-heeled and, as the moniker suggests, persistent; unfortunately, it’s likely they will always be around. Different APTs run many different types of operations, but these threat actors tend to be the likeliest to attack the network layer of companies directly to accomplish their goals. We know some advanced groups are actively targeting web3 projects, and we suspect there are others who have yet to be identified. ... One of the most well-known APTs is Lazarus, a North Korean group which the FBI recently attributed as having conducted the largest crypto hack to date. ... Now that web3 lets people directly trade assets, such as tokens or NFTs, with almost instant finality, phishing campaigns are targeting its users. These attacks are the easiest way for people with little knowledge or technical expertise to make money stealing crypto. Even so, they remain a valuable method for organized groups to go after high-value targets, or for advanced groups to wage broad-based, wallet-draining attacks through, for example, website takeovers.


The New Face of Data Governance

In light of the changes in the nature of data, the level of data regulation, and the data democratization trend, it’s safe to say that the traditional, old, boring data governance is dead. We can’t leave it in the grave, though, as we need data governance more than ever today. Our job is thus to resurrect it and give it a new face. ... Data governance should embrace the trends of operational analytics and data democratization, and ensure that anybody can use data at any time to make decisions with no barrier to access or understanding. Data democratization means that there are no gatekeepers creating a bottleneck at the gateway to data. This is worth mentioning, as the need for data governance to be secure and compliant often leads programs to create bottlenecks at the gateway to data, as the IT team is usually put in charge of granting access. Operational people can end up waiting hours until they manage to get access to a dataset; by then, they have already given up on their analysis. It’s important to have security and control, but not at the expense of the agility that data offers.


It’s time for businesses to embrace the immersive metaverse

Companies need to understand what’s possible in the metaverse, what’s already in use and what customers or employees will expect as more organizations create immersive experiences to differentiate their products and services. The possibilities may include improvements in what companies are doing now as well as revolutionary changes in the way companies operate, connect and engage with customers and employees to increase loyalty. How can leaders start to identify opportunities in the metaverse? Start, as always, with low-hanging fruit, like commerce and brand experiences that can benefit from immersive support. Also consider the technology that can enable what you need. From an architectural standpoint, it’s helpful to think of immersive experiences as a three-layer cake. The top layer is where users get access via systems of engagement. The middle layer is where messages are sent, received and routed to the right people via systems of integration. The bottom layer comprises the databases and transactions — the systems of record. 


Why it’s so damn hard to make AI fair and unbiased

The problem is that if there’s a predictable difference between two groups on average, then these two definitions will be at odds. If you design your search engine to make statistically unbiased predictions about the gender breakdown among CEOs, then it will necessarily be biased in the second sense of the word. And if you design it not to have its predictions correlate with gender, it will necessarily be biased in the statistical sense. So, what should you do? How would you resolve the trade-off? Hold this question in your mind, because we’ll come back to it later. While you’re chewing on that, consider the fact that just as there’s no one definition of bias, there is no one definition of fairness. Fairness can have many different meanings — at least 21 different ones, by one computer scientist’s count — and those meanings are sometimes in tension with each other. “We’re currently in a crisis period, where we lack the ethical capacity to solve this problem,” said John Basl, a Northeastern University philosopher who specializes in emerging technologies. So what do big players in the tech space mean, really, when they say they care about making AI that’s fair and unbiased? 
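The trade-off can be made concrete with a toy calculation (the numbers are illustrative, not from the article): when two groups have different base rates, a predictor that matches each group's true rate necessarily correlates with group membership, while a group-independent predictor is necessarily miscalibrated for each group.

```python
# Two groups with different base rates of the outcome (e.g., 70% vs 30%).
base_rate = {"A": 0.7, "B": 0.3}

# Definition 1: statistically unbiased -> predict each group's true rate.
unbiased = {g: rate for g, rate in base_rate.items()}
# Its predictions necessarily differ by group, i.e., correlate with group:
correlates_with_group = unbiased["A"] != unbiased["B"]

# Definition 2: predictions independent of group -> one shared rate for all.
shared = sum(base_rate.values()) / len(base_rate)
independent = {g: shared for g in base_rate}
# But now each group's prediction is off from its true rate (miscalibrated):
calibration_error = {g: abs(independent[g] - base_rate[g]) for g in base_rate}
```

Whenever the base rates differ, at least one definition of bias must be violated; the only predictor satisfying both would require the group rates to be equal in the first place.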


Quantum computing to run economic models on crypto adoption

Indeed, QC makes use of an uncanny quality of quantum mechanics whereby an electron or atomic particle can be in two states at the same time. In classical computing, an electric charge represents information as either a 0 or a 1, and that is fixed, but in quantum computing, a quantum bit can be both a 0 and a 1 at the same time, a superposition of the two states. If this unique quality can be harnessed, computing power explodes manyfold, and QC’s development, paired with Shor’s algorithm — first described in 1994 as a theoretical possibility, but soon to be a wide-reaching reality, many believe — also threatens to burst apart RSA encryption, which is used across much of the internet, including websites and email. “Yes, it’s a very tough and exciting weapons race,” Miyano told Cointelegraph. “Attacks — including side-channel attacks — to cryptosystems are becoming more and more powerful, owing to the progress in computers and mathematical algorithms running on the machines. Any cryptosystem could be broken suddenly because of the emergence of an incredibly powerful algorithm.”
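The superposition idea can be sketched with plain arithmetic: a qubit is described by two amplitudes whose squared magnitudes give the probabilities of measuring 0 or 1 (a minimal illustration, not a simulation of a real device).

```python
import math

def measure_probs(a, b):
    """A qubit state is a pair of amplitudes (a, b) with |a|^2 + |b|^2 = 1;
    measuring it yields 0 with probability |a|^2 and 1 with probability |b|^2."""
    p0, p1 = abs(a) ** 2, abs(b) ** 2
    assert math.isclose(p0 + p1, 1.0)  # normalization check
    return p0, p1

# Equal superposition: "both 0 and 1 at once" until measured,
# with a 50/50 chance of each outcome on measurement.
amp = 1 / math.sqrt(2)
p0, p1 = measure_probs(amp, amp)
```

The computational power comes from n qubits holding amplitudes over all 2^n basis states at once, which is what algorithms like Shor's exploit.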


Cybercriminals are finding new ways to target cloud environments

Criminals have also shifted their focus from Docker to Kubernetes. Attacks against vulnerable Kubernetes deployments and applications increased to 19% in 2021, up from 9% in 2020. Kubernetes environments are a tempting target, as once an attacker gains initial access, they can easily move laterally to expand their presence. Attacks that affect an entire supply chain have increased over the past few years, and that has been felt across the software supply chain as well. In 2021, attackers aiming at software suppliers as well as their customers and partners employed a variety of tactics, including exploiting open source vulnerabilities, infecting popular open source packages, compromising CI/CD tools and code integrity, and manipulating the build process. Last year, supply-chain attacks accounted for 14.3% of the samples seen from public image libraries. “These findings underscore the reality that cloud native environments now represent a target for attackers, and that the techniques are always evolving,” said Assaf Morag, threat intelligence and data analyst lead for Aqua’s Team Nautilus.


Addressing the last mile problem with MySQL high availability

Because a single database server is shared between a variety of client applications, a single rogue transaction from an unoptimized query could potentially modify millions of rows in one of the databases on the server, causing performance implications for the other databases. These transactions have the potential to overload the I/O subsystem and stall the database server. In this situation, the Orchestrator is unable to get a response from the primary node, and the replicas also face issues in connecting to the primary. This causes the Orchestrator to initiate a failover. This problem is compounded by the application re-trying these transactions upon failure, stalling database operations repeatedly. These transactions halt the database for many seconds, and the Orchestrator is quick to catch the stalled state and initiate a failover, impacting the general availability of the MySQL platform. We knew that MySQL stores the number of rows modified by any running transaction, and that this number can be obtained by querying the trx_rows_modified column of the innodb_trx table in the information_schema database.
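A check along these lines might look like the following sketch. The SQL targets MySQL's real information_schema.innodb_trx table; the connection handling is omitted, and the filtering helper and threshold are illustrative:

```python
# Query for in-flight InnoDB transactions and how many rows each has modified.
TRX_QUERY = """
    SELECT trx_id, trx_mysql_thread_id, trx_rows_modified
    FROM information_schema.innodb_trx
"""

def rogue_transactions(rows, threshold=1_000_000):
    """Given (trx_id, thread_id, rows_modified) rows fetched with TRX_QUERY,
    return the thread ids of transactions large enough to stall the server
    and trip an Orchestrator failover."""
    return [thread_id for _trx_id, thread_id, modified in rows
            if modified > threshold]

# Example rows as a client library might return them (made-up values).
sample = [("421", 101, 42), ("422", 102, 5_200_000), ("423", 103, 980)]
suspects = rogue_transactions(sample)
```

Flagged thread ids could then be surfaced to operators (or killed) before the transaction stalls the primary long enough to trigger a failover.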


California eyes law to protect workers from digital surveillance

The bill would “establish much needed, yet reasonable, limitations on how employers use data-driven technology at work,” Kalra told the Assembly Labor and Employment Committee on Wednesday. “The time is now to address the increasing use of unregulated data-driven technologies in the workplace and give workers — and the state — the necessary tools to mitigate any insidious impacts caused by them.” The use of digital surveillance software grew during the pandemic as employers sought to track employees’ productivity and activity when working from home, installing software that uses techniques such as keystroke logging and webcam monitoring. Digital monitoring and management is being used across a variety of sectors, with warehouse staff, truck drivers and ride-hailing drivers subject to movement and location tracking, for example, and with decisions around promotions, hiring and even firing made by algorithms in some cases. The bill, which was approved by the committee on a 5-2 vote and now moves to the Appropriations Committee for more debate, makes three core proposals.


Data privacy: 5 mistakes you are probably making

It is a mistake to act on laws that apply only in the geographic location of business operations. There might be privacy regulations/compliance issues that apply to a company beyond those that exist where the company is located – for example, a company headquartered in New York might have customers in Europe, and some European data privacy regulations likely would apply beyond any U.S.-based regulations. This is a significant problem with breach response laws. A large number of U.S. organizations follow the requirements only for their own state or territory. There are at least 54 U.S. state/territory breach laws, so this belief could be very costly. Privacy management programs should apply to all applicable laws and regulations of the associated individuals and also synthesize all requirements so that one set of procedures can be followed to address the common requirements, in addition to meeting unique requirements for specific laws. Many organizations are also overconfident that they will not experience a privacy breach, which leaves them unable to respond effectively, efficiently, and fully when a breach does happen.



Quote for the day:

"Leadership is just another word for training." -- Lance Secretan

Daily Tech Digest - April 23, 2022

Return on CI/CD Is Larger than the Business Outcome

In very simple terms, when you adopt CI/CT/CD, every piece of dev work — new feature, bug fix, improvement — is continuously tested and integrated into your “ready to ship” branch and is, well, ready to be released to your customers based on your criteria for delivery. Since new dev work is continuously tested for quality and regressions, you have high confidence to release more frequently. I used to work at a company where, when a critical patch was needed, we just triggered our pipeline, which performed extensive validations involving just a handful of people, and after a short time we were ready to cut a release. However, for a software organization looking into adopting effective CI/CD, the return on investment (ROI) should not be purely focused on measuring its business outcomes. The DORA metrics can give you a measure of the positive business outcomes from adopting an effective CI/CD process — that is, more frequent releases, faster delivery of changes to customers, fewer bugs and incidents, and faster recovery from incidents. On the other hand, and equally important, adopting effective CI/CD has positive outcomes for the development teams as well — it leads to higher innovation, higher throughput, a quality and automation mindset, and higher team morale.
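Two of the DORA measures mentioned can be computed from nothing more than deploy logs; a minimal sketch with made-up numbers:

```python
from datetime import datetime

def deployment_frequency(deploy_times, period_days):
    """Deploys per day over a window -- one of the four DORA metrics."""
    return len(deploy_times) / period_days

def change_failure_rate(deploys, failures):
    """Fraction of deploys that caused an incident -- another DORA metric."""
    return failures / deploys

# A four-week window of deploy timestamps (illustrative data).
deploys = [datetime(2022, 4, d) for d in (1, 4, 8, 11, 15, 18, 22)]
freq = deployment_frequency(deploys, period_days=28)   # deploys per day
cfr = change_failure_rate(deploys=7, failures=1)       # failed-change fraction
```

Tracking these alongside lead time for changes and time to restore service gives the business-outcome half of the ROI picture; the team-morale half has to be measured separately.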


Customer experience and data privacy need to go hand-in-hand

Not only are new data privacy laws impacting the future of marketing and advertising to consumers, but new approaches from Google and Apple, adopted as a means to adhere to data privacy laws, are having an impact as well. However, while these steps are thinly veiled attempts to make it look like data privacy is the concern, it’s yet another attempt by big tech to distract from the issue at hand: the consumer no longer has a say. Tracking customers’ page views, serving up ideas of what they might like in the future and just forgetting to ask what they prefer has become the norm. Brands have a real opportunity to adapt their current infrastructures to build privacy-safe data stores that adhere to compliance and regulations as part of the platform or ecosystem. This allows them to keep using their (first-party) data-driven approach, while allowing consumers to feel assured their data is being protected and they have a voice. It’s the same problem all over again — brands getting excited to capitalize on the latest trends and, in their frenzy, pushing consumer data privacy concerns aside to get there first.


Why So Many Security Experts Are Concerned About Low-Code/No-Code Apps

Since low-code/no-code platforms often find their way into the enterprise through business units rather than top-down through IT, they can easily slip through the cracks and be missed by security and IT teams. While security teams are in most cases part of the procurement process, it's easy to treat a low-code/no-code platform as just another SaaS application used by the business, not realizing that the result of adopting this platform would be empowering a whole array of new citizen-developers in the business. In one large organization, citizen-developers in the finance team built an expense management application to replace a manual process filled with back-and-forth emails. Employees quickly adopted the application since it made it easier for them to get reimbursed. The finance team was happy because it automated part of its repetitive work. But IT and security were not in the loop. It took some time for them to notice the application, understand that it was built outside of IT, and reach out to the finance team to bring the app under the IT umbrella. Security and IT teams are always in a state where the backlog of concerns is much larger than their ability to invest. 


‘Decentralized’ web3 startups find out the hard way there’s no safety net

The problem, he explains, is that the policies “very specifically do not include digital assets, meaning if the hackers had gotten in and stolen cash [from Axie], it would have been squarely covered by a crime policy.” Since they didn’t, it wasn’t. The challenge for insurers largely ties to the lack of protections that digital assets currently receive from banking regulators. As Wallace explains it, “Some [insurance] markets are open to making some modifications, but I wouldn’t say it’s mainstream at this point” largely because there is no kind of equivalent to the FDIC or the Securities Investor Protection Corporation (SIPC), which partly protect financial institutions in the event that money deposited in a bank or with a broker-dealer is stolen. “That concept does not yet exist in digital,” Wallace says, adding that it’s “probably the most common point of interest of web3 companies.” Insurers hoping for protections to emerge could be waiting a while, given the way things are trending. Consider that earlier this month, the FDIC issued a “financial institution letter” (or FIL) that suggests the agency is still evaluating — and concerned by — the risk posed by crypto assets and that it wants more information about how the institutions it covers can conduct crypto-related activities in a safe and sound manner.


How To Automate Training Programs To Develop Employees' Leadership Skills

When investing in corporate learning, companies expect to make a real impact on business outcomes. Nevertheless, only one in four senior managers reports that leadership training tangibly influences a company's outcomes (paywall). Corporations spend plenty of resources on traditional employee training based on outdated methods. Many courses are considered successfully finished without any feedback or post-training assessment; they impart little or no real knowledge or skill, turning the investment into a hemorrhage of time and money. Combining rigorous assessment with a development program, by contrast, boosts bench strength by an average of 30%. The issue is hard to address because organizations lack the human resources to nurture a leadership mindset and supervise its development. Consider, for instance, video courses with personal feedback for each student from a coach: the approach is hard to scale because the trainer's time is limited. That problem can be solved by automating the personal leadership program so that a script carries out the role of trainers and their assistants.


The Role of DevOps in Cloud Security Management

Security on the cloud vs. security of the cloud always needs to be top of mind. Don’t forget that you are responsible for securing your own applications, data, OS, user access, and virtual network traffic. Beyond these, bone up on your configuration basics. More than 5 percent of AWS S3 buckets are misconfigured to be publicly readable. Recently, a simple misconfiguration in Kafdrop revealed the Apache Kafka stacks of some of the world’s largest businesses. While the big three clouds have invested millions to secure their stacks, the PaaS companies don’t have those budgets, so check, check, and double-check. There’s a reason it’s called “zero trust.” With SaaS and web security, again, credential protection is key. Each architecture type requires its own type of security, so be diligent. For example, a hybrid cloud infrastructure needs a “triple whammy” of security: the on-premises environment must be highly secure, with all ports closed, the attack surface tracked, and a highly active Security Operations Center (SOC), while the public cloud aspect needs to be secured using the latest security tech available with that public cloud stack.
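The S3 misconfiguration risk above comes down to a handful of public-access guards being left off. Below is a minimal sketch of a pre-deployment audit that flags bucket configurations left publicly readable. The configuration shape mirrors AWS's PublicAccessBlockConfiguration field names, but this is a standalone illustration, not a call into the real AWS API.

```python
# The four guards that together block public access to an S3-style bucket.
# Field names follow AWS's PublicAccessBlockConfiguration; the audit logic
# itself is an illustrative sketch.
REQUIRED_BLOCKS = (
    "BlockPublicAcls",
    "IgnorePublicAcls",
    "BlockPublicPolicy",
    "RestrictPublicBuckets",
)

def is_publicly_readable(public_access_block: dict) -> bool:
    """True if any of the four public-access guards is missing or disabled."""
    return any(not public_access_block.get(flag, False) for flag in REQUIRED_BLOCKS)

def audit(buckets: dict) -> list:
    """Return the names of buckets whose configuration leaves them public."""
    return [name for name, cfg in buckets.items() if is_publicly_readable(cfg)]

buckets = {
    "reports": {flag: True for flag in REQUIRED_BLOCKS},  # fully guarded
    "scratch": {"BlockPublicAcls": True},                 # three guards missing
}
print(audit(buckets))  # -> ['scratch']
```

Running a check like this in CI is one concrete way to "check, check, and double-check" configuration before anything reaches production.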


Hidden Interfaces for Ambient Computing

While many of today’s consumer devices employ active-matrix organic light-emitting diode (AMOLED) displays, their cost and manufacturing complexity are prohibitive for ambient computing. Other display technologies, such as E-ink and LCD, lack sufficient brightness to penetrate materials. To address this gap, we explore the potential of passive-matrix OLEDs (PMOLEDs), which are based on a simple design that significantly reduces cost and complexity. However, PMOLEDs typically use scanline rendering, where active display driver circuitry sequentially activates one row at a time, a process that limits display brightness and introduces flicker. Instead, we propose a system that uses parallel rendering, where as many rows as possible are activated simultaneously in each operation by grouping rectilinear shapes of horizontal and vertical lines. For example, a square can be shown with just two operations, in contrast to traditional scanline rendering, which needs as many operations as there are rows. With fewer operations, parallel rendering can output significantly more light in each instant to boost brightness and eliminate flicker.
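The operation counts in the square example can be sketched in code. The model below assumes that rows sharing an identical column pattern can be driven in a single parallel operation; the frame representation and function names are illustrative, not from the paper.

```python
# Sketch of scanline vs. parallel operation counts for a PMOLED-style frame,
# modeled as a list of rows of 0/1 pixels. Assumption: rows with the same
# column pattern can be activated together in one parallel operation.

def scanline_ops(frame):
    """Scanline rendering: one operation per row with at least one lit pixel."""
    return sum(1 for row in frame if any(row))

def parallel_ops(frame):
    """Parallel rendering: one operation per distinct column pattern among lit rows."""
    return len({tuple(row) for row in frame if any(row)})

def square_outline(n):
    """n x n frame with a hollow square: full top/bottom rows, edge pixels between."""
    full = [1] * n
    edge = [1] + [0] * (n - 2) + [1]
    return [full] + [edge[:] for _ in range(n - 2)] + [full]

frame = square_outline(8)
print(scanline_ops(frame))  # -> 8 (one operation per row)
print(parallel_ops(frame))  # -> 2 (top/bottom pattern + side-edge pattern)
```

An 8-row square outline needs 8 scanline operations but only 2 parallel ones, matching the text: the top and bottom rows share one column pattern, and the six middle rows share another.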


Flooded by Event Data? Here’s How to Keep Working

Today’s digital-first organizations need to create superb experiences for their customers or risk irrelevance. Ideally, this means resolving any operational issue before the end user has realized something is wrong. For most organizations, however, it’s not that easy: digital operations teams are drowning in a tsunami of events, existing tooling is unable to cope, and manual processes and multiple point solutions translate into interruptions and escalations for overburdened responders. This is where event orchestration can help. Event orchestration enables users to route events toward the most appropriate set of actions. PagerDuty’s event orchestration functionality, for example, analyzes, enriches, applies logic to, and automatically acts on events as they occur, in real time, within microseconds. This enables our customers to take all the events coming in from 650+ integrations and apply logic and automation to figure out what should be done with each one, what the next best action is, at machine speed. Because automated actions can be nested, users can run one action, start a diagnostic process, learn more about the event, and then use that information to figure out what to do next.
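The routing idea described above can be sketched as a small rules engine: each rule matches an event, enriches it, and names the action to take. The rule shapes and field names here are illustrative assumptions, not PagerDuty's actual API.

```python
# Minimal sketch of event orchestration: the first matching rule enriches
# the event and selects an action; unmatched events are suppressed as noise.
# Rules are (match_predicate, enrich_fn, action_name) triples -- an assumed
# shape for illustration only.

def orchestrate(event, rules):
    """Apply the first matching rule: enrich the event and return its action."""
    for matches, enrich, action in rules:
        if matches(event):
            enriched = {**event, **enrich(event)}
            return action, enriched
    return "suppress", event  # default: drop events that match no rule

rules = [
    (lambda e: e.get("severity") == "critical",
     lambda e: {"runbook": "restart-service"},   # enrichment: attach a hint
     "page_on_call"),
    (lambda e: e.get("source") == "monitoring",
     lambda e: {"triage": "auto"},
     "run_diagnostics"),
]

action, event = orchestrate({"severity": "critical", "source": "app"}, rules)
print(action)  # -> page_on_call (and the event now carries the runbook hint)
```

Nesting, as described in the excerpt, amounts to letting an action's output feed a further `orchestrate` pass, so a diagnostic step can refine the event before the next decision is made.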


EdgeDB wants to modernize databases for cutting-edge apps

Unsurprisingly, companies are increasingly embracing alternatives to relational databases, like NoSQL. Driven by a lack of scalability with legacy solutions, they’re looking for modern systems — including cloud-based systems — that support scaling while reducing costs and accelerating development. Gartner predicts that 75% of all databases will be migrated to a cloud service by 2022 — highlighting the shift. “The database industry is facing a major shift to a new business model,” Yury Selivanov, the CEO of EdgeDB, a startup creating a next-gen database architecture, told TechCrunch via email. “It’s clear that there is a long tail of small- and medium-sized businesses that need to build software fast and then host their data in the cloud, preferably in a convenient and economical way.” Selivanov touts EdgeDB, which he co-founded in 2019 with Elvis Pranskevichus, as one of the solutions to the legacy database problem. EdgeDB’s open source architecture is relational, but Selivanov says that it’s engineered to solve some fundamental design flaws that make working with databases — both relational and NoSQL — unnecessarily onerous for enterprises.


Overcoming the biggest cyber security staff challenges

Too many people perceive cybersecurity as a complex, technical world dominated by geeks in hoodies. They cannot see the vast opportunity to add value with their own skill sets. We need to broaden the vision so that every employee can become a partner in the security family and enrich it with their own talents. Marketers, lawyers, crisis leaders, authors, and game designers can all be part of a holistic security strategy, adding value and reducing risk, without stepping away from their primary passion. ... Too many senior staff are leaving the industry due to stress and overwork. The security leadership role has become incredibly broad, carrying accountability for protecting against risks and threats across the entire business, yet the team remains a pyramid with a narrow base. By clearly pushing accountability back to the business units to adhere to standards, and holding them (rather than the security team) accountable when they fall short, we can free the leadership from much of the stress and minimise staff turnover.



Quote for the day:

"A tough hide with a tender heart is a goal that all leaders must have." -- Wayde Goodall