Daily Tech Digest - September 28, 2021

How and why automation can improve network-device security

Automating the processes of device discovery and configuration validation allows you to enforce good network security by making sure that your devices and configurations don't accidentally leave any security holes open. Stated differently, the goal of automation is to guarantee that your network policies are consistently applied across the entire network. A router that’s forgotten and left unsecured could be the avenue that bad actors exploit. Once each device on the network is discovered, the automation system downloads its configurations and checks them against the configuration rules that implement your network policies. These policies range from simple things that are not security related, like device naming standards, to essential security policies like authentication controls and access control lists. The automation system helps deploy and maintain the configurations that reflect your policies. ... A network-change and configuration-management (NCCM) system can use your network inventory to automate the backup of network-device configurations to a central repository.
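The config-against-policy check described above can be sketched in a few lines. This is a minimal illustration, not any vendor's product; the policy rules, naming standard, and sample config are all hypothetical:

```python
# Minimal sketch of automated config-policy validation (all rules are hypothetical).
import re

POLICY_RULES = [
    # (description, regex that must appear somewhere in every device config)
    ("hostname follows naming standard", re.compile(r"^hostname\s+[a-z]{3}-(core|edge)-\d{2}$", re.M)),
    ("AAA authentication enabled",       re.compile(r"^aaa new-model$", re.M)),
    ("vty lines restricted to SSH",      re.compile(r"^ transport input ssh$", re.M)),
]

def audit_config(config_text):
    """Return the list of policy rules this config violates."""
    return [desc for desc, pattern in POLICY_RULES
            if not pattern.search(config_text)]

config = """hostname nyc-core-01
aaa new-model
line vty 0 4
 transport input ssh
"""
print(audit_config(config))  # an empty list means the device is compliant
```

Run against every configuration the NCCM system backs up, a report like this surfaces the "forgotten router" before an attacker finds it.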


How Unnecessary Complexity Gave the Service Mesh a Bad Name

The difficulty comes from avoiding “retry storms” or a “retry DDoS,” which is when a system in a degraded state triggers retries, increasing load and further decreasing performance as retries increase. A naive implementation won’t take this scenario into account as it may require integrating with a cache or other communication system to know if a retry is worth performing. A service mesh can do this by providing a bound on the total number of retries allowed throughout the system. The mesh can also report on these retries as they occur, potentially alerting you of system degradation before your users even notice. ... The design pattern of sidecar proxies is another exciting and powerful feature, even if it is sometimes oversold and over-engineered to do things users and tech aren’t quite ready for. While the community waits to see which service mesh “wins,” a reflection of the over-hyped orchestration wars before it, we will inevitably see more purpose-built meshes in the future and, likely, more end-users building their own control planes and proxies to satisfy their use cases.
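The "bound on the total number of retries" the excerpt describes is often called a retry budget. Here is a toy sketch of the idea, with illustrative names; real meshes implement this per-route with time windows, not a single counter:

```python
# Sketch of a mesh-style retry budget: retries draw from a shared, bounded
# allowance, so a degraded dependency can't trigger an unbounded retry storm.
class RetryBudget:
    def __init__(self, max_retries):
        self.remaining = max_retries  # total retries allowed across ALL calls

    def call_with_retries(self, fn, attempts=3):
        last_exc = None
        for attempt in range(attempts):
            try:
                return fn()
            except Exception as exc:
                last_exc = exc
                # Stop when out of per-call attempts OR out of the shared budget.
                if attempt + 1 == attempts or self.remaining == 0:
                    break
                self.remaining -= 1  # each retry consumes shared budget
        raise last_exc
```

When the budget is exhausted, further failures surface immediately instead of multiplying load, which is exactly the degradation signal a mesh can report on.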


How To Deal With Data Imbalance In Classification Problems?

A classification model is a technique that tries to draw conclusions or predict outcomes based on input values given for training. The input, for example, can be historical bank or other financial-sector data. The model will predict the class labels/categories for the new data and say whether the customer will be valuable or not, based on demographic data such as gender, income, age, etc. Target class imbalance means the classes or categories in the target column are not balanced. Rao, giving an example of a marketing campaign, said: let’s say we have a classification task on hand to predict if a customer will respond positively to a campaign or not. Here, the target column, responded, has two classes: yes or no. So, those are the two categories. In this case, let’s say the majority of people responded ‘no’: in a marketing campaign where you end up reaching out to a lot of customers, only a handful of them want to subscribe, for example when you are offering a credit card or a new insurance policy. The ones who subscribed or were interested would request more details.
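One common remedy for this kind of imbalance is to weight the rare class more heavily during training. The sketch below computes inverse-frequency ("balanced") class weights, the same scheme scikit-learn uses for `class_weight="balanced"`; the 90/10 split is made up to mirror the campaign example:

```python
# Sketch: "balanced" (inverse-frequency) class weights to counter target imbalance.
from collections import Counter

def balanced_class_weights(labels):
    counts = Counter(labels)
    n, k = len(labels), len(counts)
    # weight = n_samples / (n_classes * class_count): rarer classes get larger weights
    return {cls: n / (k * c) for cls, c in counts.items()}

labels = ["no"] * 90 + ["yes"] * 10   # 90/10 imbalance, like the campaign example
print(balanced_class_weights(labels))
```

Passing these weights to a model's loss function makes each rare 'yes' count as much as several common 'no's, so the model can't simply predict the majority class everywhere.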


Motivational debt — it will fix itself, right?

Motivational debt is a hidden cost to product delivery. It’s the rust that is accruing on aged PBIs, the sludge at the bottom of the Sprint Backlog and the creaking of the process when needing to do something new. Technical debt is to quality what motivational debt is to process. It’s important to remember that whilst motivational debt is shouldered by the entire Scrum Team, there is an individual element of accrual to it as well. Everything from short-term stresses which bounce back quickly (“I didn’t get any sleep last night”) to long-term tensions which don’t (“My parents are ill”) contributes to the motivational complexities of a Scrum Team. Moving to address these actively is an ethical quandary, as individuals have different coping mechanisms, meaning efforts to help may actually exacerbate the issue. Remember that whilst some team members may be feeling down, others may be up, therefore being conscious of the overall direction of pull is vital as a Scrum Master. Holistically, it is fair to say that motivational debt is felt both individually and collectively and it is everyone’s responsibility to create an environment where it can be minimised. But how can you do this?


Waste and inefficiency in outdated government IT systems

Those responsible for addressing the government’s current levels of wasted IT expenditure may find that businesses offer positive, proactive case studies that highlight the value of embracing digital transformation. A 2020 study from Deloitte, for instance, found that digitally mature companies – those that have embraced various aspects of digital transformation – saw net revenue growth of 45% and net profit growth of 43% compared to industry averages. The same study found that the benefits of digital maturation are not limited to profits but extend to a range of outcomes including increased efficiency, better product and service quality, and higher levels of both customer satisfaction and employee engagement. A study from McKinsey is even more strident, noting that “by digitising information-intensive processes, costs can be cut by up to 90% and turnaround times improved by several orders of magnitude.” Part of the ‘Organising for Digital Delivery Report’ includes a commitment to “investing in developing the technical fluency of senior civil service leadership.” 


Robotic process automation and intelligent automation are accelerating, study finds

Process mining is used to obtain a wide lens over business processes and workflows within a company by examining event logs across systems, including how variable they are and where there are bottlenecks. The less variable the process, the greater its potential candidacy for RPA/IA, though other factors must be considered as well. Task mining is used to understand how a user is interacting with systems and where there are opportunities for automation. Both of the above help identify automation candidates throughout an organization. IDP is a use case of IA and is growing in popularity, as there are so many document-intensive processes across organizations that impact many employees. ... Data governance, visibility of shadow deployments (and having guardrails in place for them), and security are all important to set in place ahead of RPA/IA to ensure architectural readiness. Another challenge is ensuring that the infrastructure is able to handle the increased speed and volume of transactions related to automated processes, whether that infrastructure is the organization's own or belongs to a business partner.
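The core process-mining move, grouping an event log by case and counting distinct activity sequences ("variants"), can be sketched briefly. The log below and its field layout are made up for illustration:

```python
# Sketch of process-mining variant analysis: fewer variants relative to case
# count means lower variability, i.e. a stronger RPA/IA candidate.
from collections import defaultdict, Counter

event_log = [
    # (case_id, timestamp, activity)
    ("c1", 1, "receive"), ("c1", 2, "approve"), ("c1", 3, "pay"),
    ("c2", 1, "receive"), ("c2", 2, "approve"), ("c2", 3, "pay"),
    ("c3", 1, "receive"), ("c3", 2, "reject"),
]

def variants(log):
    traces = defaultdict(list)
    for case_id, ts, activity in sorted(log, key=lambda e: (e[0], e[1])):
        traces[case_id].append(activity)
    return Counter(tuple(t) for t in traces.values())

v = variants(event_log)
print(v.most_common())  # dominant variants are the best automation candidates
```

Real process-mining tools add timing analysis on top of this to surface bottlenecks, but the variant count alone already answers the "how variable is it?" question.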


Importance of DevOps Automation in 2021

From a software development perspective, DevOps automation enhances the performance of the engineering teams with the help of top-notch DevOps tools. It encourages cross teams to work together by removing organizational silos. The reduced team inter-dependencies and manual processes for infrastructure management have enabled the software team to focus on frequent releases, receiving quick feedback, and improving user experience. From an organizational point of view, DevOps automation reduces the chances of human errors and saves the time used for error detection with the help of auto-healing features. Additionally, it minimizes the time required for deploying new features significantly and removes any inconsistencies caused by human errors. Enterprises should first focus on the areas where they face the most challenges. The decision on what to automate depends on their organizational needs and technological feasibility. The DevOps automation teams should be able to analyze which areas of the DevOps lifecycle need automation.


The biggest problem with ransomware is not encryption, but credentials

The obvious concern about being the victim of a ransomware attack is being locked out from data, applications, and systems – making organizations unable to do business. Then, there is the concern of what an attack is going to cost; the question of whether or not you need to pay the ransom is being forced by cybercriminal gangs, as 77% of attacks also included the threat of leaking exfiltrated data. Next are the issues of lost revenue, an average of 23 days of downtime, remediation costs, and the impact on the businesses’ reputation. But those are post-attack concerns, and you should, first and foremost, be laser-focused on what effective measures you can take to stop ransomware attacks. Organizations that are truly concerned about the massive growth in ransomware are working to understand the tactics, techniques and procedures used by threat actors to craft preventative, detective and responsive measures to either mitigate the risk or minimize the impact of an attack. Additionally, these organizations are scrutinizing the technologies, processes and frameworks they have in place, as well as asking the same of their third-party supply chain vendors.

If your organization is looking to hire data engineers in the next 12 months, be prepared to move quickly in your hiring process and think carefully before you waste time negotiating salaries. That’s some of the advice for hiring managers from the first edition of Salaries of Data Engineering Professionals from the quantitative executive recruiting firm Burtch Works. Known for its work with data scientists and analytics professionals, and its annual salary surveys that look at the employment trends for those professionals, this year, Burtch Works has expanded by offering this new survey for data engineers, conducted in individual interviews with 320 of these professionals based in the United States. The survey looks at salaries, demographics, and trends among data engineers. What is a data engineer? These are the professionals responsible for building and managing the data and IT infrastructure that sits between the data sources and the data analytics. They report into the IT department, the data science department, or both. According to the Burtch Works survey, these professionals command a high rate of pay.


Data And Analytics In Healthcare: Addressing the 21st-century Challenges

Scientists have claimed victory against future diseases after successfully decoding the human genome. The marriage of this knowledge to the health data generated by patients would enable clinicians to make better decisions about our care. There are two benefits of using predictive analytics: better care and lower costs. The biggest lesson of the recent global health issues such as COVID-19, SARS, dengue and malaria outbreaks is that pharma and healthcare companies cannot afford merely to react to every emerging situation. They need to track several data streams of local, regional, and global trends, create a database, and then predict various scenarios. Data analytics helps companies develop their predictive models, enabling them to make quicker, intelligent decisions, build partnerships, and resolve bottlenecks before the crisis hits the shore. Such data-driven measures aim to save invaluable lives and allow care to be personalized for each individual. Predictive analytics can classify particular risk factors for diverse populations. This is very useful for patients suffering from multiple ailments with complex medical histories. 



Quote for the day:

"Every great leader has incredible odds to overcome." -- Wayde Goodall

Daily Tech Digest - September 27, 2021

How to Get Started With Zero Trust in a SaaS Environment

While opinions vary on what zero trust is and is not, this security model generally considers the user's identity as the root of decision-making when determining whether to allow access to an information resource. This contrasts with earlier approaches that made decisions based on the network from which the person was connecting. For example, we often presumed that workers in the office were connecting directly to the organization's network and, therefore, could be trusted to access the company's data. Today, however, organizations can no longer grant special privileges based on the assumption that the request is coming from a trusted network. With the high number of remote and geographically dispersed employees, there is a good chance the connections originate from a network the company doesn't control. This trend will continue. IT and security decision-makers expect remote end users to account for 40% of their workforce after the COVID-19 outbreak is controlled, an increase of 74% relative to pre-pandemic levels, according to "The Current State of the IT Asset Visibility Gap and Post-Pandemic Preparedness," with research conducted by the Enterprise Strategy Group for Axonius.


Tons Of Data At The Company Store

Confidentially, many chief data officers will admit that their companies suffer from what might euphemistically be called “data dyspepsia”: they produce and ingest so much data that they cannot properly digest it. Like it or not, there is such a thing as too much data – especially in an era of all-you-can-ingest data comestibles. “Our belief is that more young companies die of indigestion than starvation,” said Adam Wilson, CEO of data engineering specialist Trifacta, during a recent episode of Inside Analysis, a weekly data- and analytics-focused program hosted by Eric Kavanagh. So what if Wilson was referring specifically to Trifacta’s decision to stay focused on its core competency, data engineering, instead of diversifying into adjacent markets. So what if he was not, in fact, alluding to a status quo in which the average business feels overwhelmed by data. Wilson’s metaphor is no less apt if applied to data dyspepsia. It also fits with Trifacta’s own pitch, which involves simplifying data engineering – and automating it, insofar as is practicable – in order to accelerate the rate at which useful data can be made available to more and different kinds of consumers.


Hyperconverged analytics continues to guide Tibco strategy

One of the trends we're seeing is that people know how to build models, but there are two challenges. One is on the input side and one is on the output side. On the input side, you can build the greatest models in the world, but if you feed them bad data that's not going to help. So there's a renewed interest around things like data governance, data quality and data security. AI and ML are still very important, but there's more to it than just building the models. The quality of the data, and the governance and processes around the data, are also very important. That way you get your model better data, which makes your model more accurate, and from there you're going to get better outcomes. On the output side, since there are so many models being built, organizations are having trouble operationalizing them all. How do you deploy them into production, how do you monitor them, how do you know when it's time to go back and rework that model, how do you deploy them at the edge, how do you deploy them in the cloud and how do you deploy them in an application? 


Gamification: A Strategy for Enterprises to Enable Digital Product Practices

As digital products take precedence, the software ecosystem brings new possibilities to products. With the rise of digital products, cross-functional boundaries are blurring. New skills and unlearning old ways are critical. Gamification can support creating a ladder approach to acquiring and utilizing new skills for continuous software delivery ecosystems, testing and security. However, underpinning collective wisdom through gamification needs a systematic framework where we are able to integrate game ideation, design, validation & incentives with different persona types. Applying gamification in a systematic manner to solve serious problems, ideate, and come together to create new knowledge in a fun way is challenging. To successfully apply gamification for upskilling and boosting productivity, it will have to be accompanied by understanding its purposefulness through the following two critical perspectives: Benefits of embracing gamification for people – Removing fear, having fun, and making the desirable shift towards new knowledge; creating an environment that is inclusive and can provide a learning ecosystem for all. 


Artificial Intelligence: The Future Of Cybersecurity?

Cybersecurity in Industry 4.0 can't be tackled in the same way as that of traditional computing environments. The number of devices and associated challenges are far too many. Imagine monitoring security alerts for millions of connected devices globally. IIoT devices possess limited computing power and, therefore, lack the ability to run security solutions. This is where AI and machine learning come into play. ML can make up for the lack of security teams. AI can help discover devices and hidden patterns while processing large amounts of data. ML can help monitor incoming and outgoing traffic for any deviations in behavior in the IoT ecosystem. If a threat or anomaly is detected, alarms can be sent to security admins warning them about the suspicious traffic. AI and ML can be used to build lightweight endpoint detection technologies. This can be an indispensable solution, especially in situations where IoT devices lack the processing power and need behavior-based detection capabilities that aren't as resource intensive. AI and ML technologies are a double-edged sword. 
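The "monitor traffic for deviations in behavior" idea boils down to learning a baseline and flagging outliers. Here is a deliberately tiny sketch using a z-score threshold; the traffic figures are invented, and production systems use far richer features and models:

```python
# Toy illustration of behavior-based anomaly detection on IoT traffic volumes:
# flag readings more than 3 standard deviations from a learned baseline.
from statistics import mean, stdev

def fit_baseline(samples):
    return mean(samples), stdev(samples)

def is_anomalous(value, mu, sigma, threshold=3.0):
    return abs(value - mu) > threshold * sigma

baseline = [100, 102, 98, 101, 99, 103, 97, 100]  # bytes/sec under normal behavior
mu, sigma = fit_baseline(baseline)
print(is_anomalous(100, mu, sigma))   # False: normal traffic
print(is_anomalous(5000, mu, sigma))  # True: possible exfiltration or botnet chatter
```

Because the detection logic can run centrally on aggregated telemetry, the constrained IoT devices themselves only need to report counters, which is precisely why this approach suits low-power endpoints.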


3 ways any company can guard against insider threats this October

Companies don’t become cyber smart by accident. In fact, cybersecurity is rarely top-of-mind for the average employee as they go about their day and pursue their professional responsibilities. Therefore, businesses are responsible for educating their workforce, training their teams to identify and defend against the latest threat patterns. For instance, phishing scams have increased significantly since the pandemic’s onset, and each malicious message threatens to undermine data integrity. Meanwhile, many employees can’t identify these threats, and they wouldn’t know how to respond if they did. Of course, education isn’t limited to phishing scams. One survey found that 61 percent of employees failed a basic quiz on cybersecurity fundamentals. With the average company spending only 5 percent of its IT budget on employee training, it’s clear that education is an untapped opportunity for many organizations to #BeCyberSmart. When coupled with intentional accountability measures that ensure training is implemented, companies can transform their unaware employees into incredible defensive assets.


VMware gears up for a challenging future

“What we are doing is pivoting our portfolio or positioning our portfolio to become the multi-cloud platform for our customers in three ways,” Raghuram said. “One is enabling them to execute their application transformation on the cloud of their choice using our Tanzu portfolio. And Tanzu is getting increased momentum, especially in the public cloud to help them master the complexities of doing application modernization in the cloud. And of course, by putting our cloud infrastructure across all clouds, and we are the only one with the cloud infrastructure across all clouds and forming the strategic partnerships with all of the cloud vendors, we are helping them take their enterprise applications to the right cloud,” Raghuram said. Building useful modern enterprise applications is a core customer concern, experts say. “Most new apps are built-on containers for speed and scalability. The clear winner of the container wars was Kubernetes,” said Scott Miller, senior director of strategic partnerships for World Wide Technology (WWT), a technology and supply-chain service provider and a VMware partner. 


Software cybersecurity labels face practical, cost challenges

Cost and feasibility are among the top challenges of creating consumer labels for software. Adding to these challenges is the fact that software is continually updated. Moreover, software comes in both open-source and proprietary formats and is created by a global ecosystem of firms that range from mom-and-pop shops all the way up to Silicon Valley software giants. "It's way too easy to create requirements that cannot be met in the real world," David Wheeler, director of open source supply chain security at the Linux Foundation and leader of the Core Infrastructure Initiative Best Practices Badge program, said at the workshop. "A lot of open-source projects allow people to use them at no cost. There's often no revenue stream. You have to spend a million dollars at an independent lab for an audit. [That] ignores the reality that for many projects, that's an impractical burden." ... Another critical aspect of creating software labels is to ensure that they don't reflect static points in time but are instead dynamic, taking into account the fluid nature of software. 


Work’s not getting any easier for parents

Part of many managers’ discomfort with remote work is that they are unsure how to gauge their off-site employees’ performance and productivity. Some business leaders equate face time with productivity. I’ll never forget a visit I had to a Silicon Valley startup in which the manager showing me around described a colleague this way: “He’s such a great worker. He’s here every night until 10, and back in early every morning!” In my work helping businesses update their policies and cultures to accommodate caregivers, I often have to rid managers of this old notion. There’s nothing impressive, or even good, about being in the office so much. To help change the paradigm, I work with managers to find new ways of measuring an individual’s performance and productivity. Instead of focusing on hours worked per day, we look at an employee’s achievements across a broader time metric, such as a month or quarter. We ask, what did the employee do for the company during that time? It’s often then that businesses realize how little overlap there is between those who are seen working the most and those who have the greatest impact on the company. 


How to use feedback loops to improve your team's performance

In systems, feedback is a fundamental force behind their workings. When we fly a plane, we get feedback from our instruments and our co-pilot. When we develop software, we get feedback from our compiler, our tests, our peers, our monitoring, and our users. Dissent works because it’s a form of feedback, and clear, rapid feedback is essential for a well functioning system. As examined in “Accelerate”, a four-year study of thousands of technology organizations found that fostering a culture that openly shares information is a sure way to improve software delivery performance. It even predicts ability to meet non-technical goals. These cultures, known as “generative” in Ron Westrum’s model of organizational culture, are performance- and learning-oriented. They understand that information, especially if it’s difficult to receive, only helps to achieve their mission, and so, without fear of retaliation, associates speak up more frequently than in rule-oriented (“bureaucratic”) or power-oriented (“pathological”) cultures. Messengers are praised, not shot.



Quote for the day:

"A pat on the back is only a few vertebrae removed from a kick in the pants, but is miles ahead in results." -- W. Wilcox

Daily Tech Digest - September 26, 2021

You don't really own your phone

When you purchase a phone, you own the physical parts you can hold in your hand. The display is yours. The chip inside is yours. The camera lenses and sensors are yours to keep forever and ever. But none of this, not a single piece, is worth more than its value in scrap without the parts you don't own but are graciously allowed to use — the copyrighted software and firmware that powers it all. The companies that hold these copyrights may not care how you use the product you paid a license for, and you don't hear a lot about them outside of the right to repair movement. Xiaomi, like Google and all the other copyright holders who provide the things which make a smartphone smart, really only wants you to enjoy the product enough to buy from them the next time you purchase a smart device. Xiaomi pissing off people who buy its smartphones isn't a good way to get those same people to buy another or buy a fitness band or robot vacuum cleaner. When you set up a new phone, you agree with these copyright holders that you'll use the software on their terms.


Edge computing has a bright future, even if nobody's sure quite what that looks like

Edge computing needs scalable, flexible networking. Even if a particular deployment is stable in size and resource requirements over a long period, to be economic it must be built from general-purpose tools and techniques that can cope with a wide variety of demands. To that end, software defined networking (SDN) has become a focus for future edge developments, although a range of recent research has identified areas where it doesn't yet quite match up to the job. SDN's characteristic approach is to divide the task of networking into two tasks of control and data transfer. It has a control plane and a data plane, with the former managing the latter by dynamic reconfiguration based on a combination of rules and monitoring. This looks like a good match for edge computing, but SDN typically has a centralised control plane that expects a global view of all network activity. ... Various approaches – multiple control planes, increased intelligence in edge switch hardware, dynamic network partitioning on demand, geography and flow control – are under investigation, as are the interactions between security and SDN in edge management.
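The control-plane/data-plane split the excerpt describes can be reduced to a minimal model: switches hold match-to-action flow tables and punt unknown traffic to a controller, which installs rules. Class and rule names below are purely illustrative:

```python
# Minimal sketch of SDN's split: the controller (control plane) installs
# match -> action rules; switches (data plane) just apply them.
class Switch:
    def __init__(self):
        self.flow_table = {}  # (src, dst) -> forwarding action

    def forward(self, src, dst):
        # Unknown flows are punted to the controller, as in OpenFlow-style designs.
        return self.flow_table.get((src, dst), "send_to_controller")

class Controller:
    def __init__(self, switches):
        self.switches = switches  # the centralized global view

    def install_rule(self, src, dst, action):
        for sw in self.switches:  # dynamic reconfiguration of the data plane
            sw.flow_table[(src, dst)] = action

edge = Switch()
ctrl = Controller([edge])
print(edge.forward("10.0.0.1", "10.0.0.2"))  # unknown flow: punt to controller
ctrl.install_rule("10.0.0.1", "10.0.0.2", "out_port_3")
print(edge.forward("10.0.0.1", "10.0.0.2"))  # now handled entirely in the data plane
```

The edge-computing tension is visible even in this toy: every "send_to_controller" is a round trip to a centralized brain, which is what the multiple-control-plane and smarter-edge-switch research aims to avoid.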


TangleBot Malware Reaches Deep into Android Device Functions

In propagation and theme, TangleBot resembles other mobile malware, such as the FluBot SMS malware that targets the U.K. and Europe or the CovidLock Android ransomware, which is an Android app that pretends to give users a way to find nearby COVID-19 patients. But its wide-ranging access to mobile device functions is what sets it apart, Cloudmark researchers said. “The malware has been given the moniker TangleBot because of its many levels of obfuscation and control over a myriad of entangled device functions, including contacts, SMS and phone capabilities, call logs, internet access, [GPS], and camera and microphone,” they noted in a Thursday writeup. To reach such a long arm into Android’s internal business, TangleBot grants itself privileges to access and control all of the above, researchers said, meaning that the cyberattackers would now have carte blanche to mount attacks with a staggering array of goals. For instance, attackers can manipulate the incoming voice call function to block calls and can also silently make calls in the background, with users none the wiser. 


Why CEOs Should Absolutely Concern Themselves With Cloud Security

Probably the biggest reason cybersecurity needs to be elevated to one of your top responsibilities is simply that, as the CEO, you call most of the shots surrounding how the business is going to operate. To lead anyone else, you have to have a crystal-clear big picture of how everything interconnects and what ramifications threats in one area have to other areas. Additionally, it’s up to you to hire and oversee people who truly understand servers and cloud security and who can build a secure infrastructure and applications. That said, virtually all businesses today are “digital” businesses in some sense, if that means having a website, an app, processing credit cards with point of sale readers or using the ‘net for your social media marketing. All of these things can be potential points of entry for hackers, who happily take advantage of any vulnerability they can find. And with more people working remotely and generally enjoying a more mobile lifestyle, the risks of cloud computing are here to stay.


Better Incident Management Requires More than Just Data

To the uninitiated, all complexity looks like chaos. Real order requires understanding. Real understanding requires context. I’ve seen teams all over the tech world abuse data and metrics because they don’t relate it to its larger context: what are we trying to solve and how might we be fooling ourselves to reinforce our own biases? In no place is this more true in the world of incident management. Things go wrong in businesses, large and small, every single day. Those failures often go unreported, as most people see failure through the lens of blame, and no one wants to admit they made a mistake. Because of that fact, site reliability engineering (SRE) teams establishing their own incident management process often invest in the wrong initial metrics. Many teams are overly concerned with reducing MTTR: mean time to resolution. Like the British government, those teams are overly relying on their metrics and not considering the larger context. Incidents are almost always going to be underreported initially: people don’t want to admit things are going wrong.
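A quick worked example shows why an MTTR figure without context can mislead: one long incident dominates the mean. The resolution times below are invented for illustration:

```python
# Sketch: MTTR (a mean) versus the median of the same incident data.
from statistics import mean, median

resolution_minutes = [12, 15, 9, 14, 11, 480]  # five quick fixes, one all-day incident

print(f"MTTR:   {mean(resolution_minutes):.1f} min")    # ~90 min, pulled up by one outlier
print(f"median: {median(resolution_minutes):.1f} min")  # 13 min, the typical experience
```

Worse, if that 480-minute incident simply goes unreported, as the excerpt warns, MTTR "improves" while reliability hasn't, which is exactly the self-reinforcing bias the author cautions against.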


Three Skills You’ll Need as a Senior Data Scientist

In the context of data science, I would say critical thinking is answering the “why”s in your data science project. Before elaborating on what I mean, the most important prerequisite is knowing the general flow of a data science project. The diagram below shows that. This is a slightly different view from the cyclic series of steps you might see elsewhere. I think this is a more realistic view than seeing it as a cycle. Now off to elaborating. In a data science project, there are countless decisions you have to make: supervised vs unsupervised learning, selecting raw fields of data, feature engineering techniques, selecting the model, evaluation metrics, etc. Some of these decisions would be obvious; for example, if you have a set of features and a label associated with it, you’d go with supervised learning instead of unsupervised learning. A seemingly tiny checkpoint you overlooked might be enough to derail the project. And it can cost money for the company and put your reputation on the line. When you answer not just “what you’re doing” but also “why you’re doing it”, it closes down most of the cracks where problems like the above can seep in.


The Benefits and Challenges of Passwordless Authentication

Passwordless authentication is a process that verifies a user's identity with something other than a password. It strengthens security by eliminating password management practices and the risk of threat vectors. It is an emerging subfield of identity and access management and will revolutionize the way employees work. ... Passwordless authentication uses some modern authentication methods that reduce the risk of being targeted via phishing attacks. With this approach, employees won't need to provide any sensitive information to the threat actors that give them access to their accounts or other confidential data when they receive a phishing email. ... Passwordless authentication appears to be a secure and easy-to-use approach, but there are challenges in its deployment. The most significant issue is the budget and migration complexity. While setting up a budget for passwordless authentication, enterprises should include costs for buying hardware and its setup and configuration. Another challenge is dealing with old-school mentalities. Most IT leaders and employees are reluctant to move away from traditional security methods and try new ones.
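One common passwordless pattern is the expiring "magic link" emailed to the user in place of a password. The sketch below shows the token mechanics only (HMAC-signed payload with an expiry); the secret, TTL, and email address are illustrative, and a real deployment would use a secrets manager and single-use tokens:

```python
# Sketch of a magic-link token: HMAC-signed "email|expiry" payload.
import base64
import hashlib
import hmac
import time

SECRET = b"server-side-secret"   # illustrative; store in a secrets manager
TTL = 15 * 60                    # token valid for 15 minutes

def issue_token(email, now=None):
    expires = int((now if now is not None else time.time()) + TTL)
    payload = f"{email}|{expires}".encode()
    sig = hmac.new(SECRET, payload, hashlib.sha256).hexdigest()
    return base64.urlsafe_b64encode(payload).decode() + "." + sig

def verify_token(token, now=None):
    encoded, sig = token.rsplit(".", 1)
    payload = base64.urlsafe_b64decode(encoded)
    expected = hmac.new(SECRET, payload, hashlib.sha256).hexdigest()
    if not hmac.compare_digest(sig, expected):
        return None              # forged or tampered token
    email, expires = payload.decode().rsplit("|", 1)
    if (now if now is not None else time.time()) > int(expires):
        return None              # expired link
    return email

token = issue_token("user@example.com")
print(verify_token(token))
```

Nothing secret is ever typed by the user, so there is nothing for a phishing page to harvest, which is the property the excerpt is describing.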


Using CodeQL to detect client-side vulnerabilities in web applications

The idea of CodeQL is to treat source code as a database which can be queried using SQL-like statements. Many languages are supported, among them JavaScript, for which both server-side and client-side flavours are covered. JS CodeQL understands modern editions such as ES6 as well as frameworks like React (with JSX) and Angular. CodeQL is not just grep: it supports taint tracking, which lets you test whether a given user input (a source) can reach a vulnerable function (a sink). This is especially useful when dealing with DOM-based cross-site scripting vulnerabilities. By tainting a user-supplied DOM property such as location.hash, one can test whether this value actually reaches one of the XSS sinks, e.g. element.innerHTML or document.write(). The common use case for CodeQL is to run a query suite against open-source code repositories. To do so you may install CodeQL locally or use https://lgtm.com/. For the latter, you specify a GitHub repository URL and add it as your project.
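Taint tracking boils down to asking whether a sink is reachable from a source through the program's dataflow graph. The sketch below models that idea with a plain graph search; it is an illustration of the concept only, not CodeQL's actual analysis, and the dataflow edges are invented for the example:

```python
# Minimal sketch of taint tracking as graph reachability (illustrative only;
# CodeQL's real dataflow analysis is far more sophisticated).
from collections import deque

# Hypothetical dataflow edges extracted from a page's JavaScript:
# location.hash -> decoded value -> html string -> element.innerHTML
FLOWS = {
    "location.hash": ["decoded"],
    "decoded": ["html"],
    "html": ["element.innerHTML"],
}

SOURCES = {"location.hash"}                       # attacker-controlled DOM properties
SINKS = {"element.innerHTML", "document.write"}   # DOM XSS sinks

def tainted_sinks(flows, sources, sinks):
    """Return every sink reachable from a taint source."""
    seen, queue = set(sources), deque(sources)
    while queue:
        node = queue.popleft()
        for nxt in flows.get(node, []):
            if nxt not in seen:
                seen.add(nxt)
                queue.append(nxt)
    return seen & sinks

print(tainted_sinks(FLOWS, SOURCES, SINKS))  # {'element.innerHTML'}
```

If the result set is non-empty, user input can reach an XSS sink, which is exactly the condition a CodeQL taint-tracking query reports.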


Moving beyond agile to become a software innovator

Experience design is a specific capability focused on understanding user preferences and usage patterns and creating experiences that delight them. The value of experience design is well established, with organizations that have invested in design exceeding industry peers by as much as 5 percent per year in growth of shareholder return. What differentiates best-in-class organizations is that they embed design in every aspect of the product or service development. As a core part of the agile team, experience designers participate in development processes by, for example, driving dedicated design sprints and ensuring that core product artifacts, such as personas and customer journeys, are created and used throughout product development. This commitment leads to greater adoption of the products or services created, simpler applications and experiences, and a substantial reduction of low-value features. ... Rather than approaching it as a technical issue, the team focused on addressing the full onboarding journey, including workflow, connectivity, and user communications. The results were impressive. The team created a market-leading experience that enabled their first multimillion-dollar sale only four months after it was launched and continued to accelerate sales and increase customer satisfaction.


The relationship between data SLAs & data products

The data-as-a-product model aims to close the gap that the data lake left open. In this philosophy, company data is viewed as a product that will be consumed by internal and external stakeholders. The data team's role is to provide that data to the company in ways that promote efficiency, good user experience, and good decision making. As such, data providers and data consumers need to work together to answer the questions put forward above. Coming to an agreement on those terms and spelling them out is called a data SLA. SLA stands for service-level agreement: a contract between two parties that defines and measures the level of service a given vendor or product will deliver, as well as the remedies if they fail to deliver. SLAs are an attempt to define expectations about the level of service and quality between providers and consumers. They're very common when an organization offers a product or service to an external customer or stakeholder, but they can also be used between internal teams within an organization.
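The terms such an agreement pins down can even be made machine-checkable. The field names in this sketch are invented for illustration and are not any standard SLA schema:

```python
from dataclasses import dataclass

@dataclass
class DataSLA:
    """Illustrative data SLA terms (hypothetical fields, not a standard)."""
    dataset: str
    freshness_hours: float      # data must be no older than this
    completeness_pct: float     # minimum share of expected rows present
    max_monthly_incidents: int  # remedy clause triggers above this count

    def is_met(self, age_hours, completeness_pct, incidents):
        """Check a month's observed metrics against the agreed terms."""
        return (age_hours <= self.freshness_hours
                and completeness_pct >= self.completeness_pct
                and incidents <= self.max_monthly_incidents)

sla = DataSLA("orders", freshness_hours=24, completeness_pct=99.5,
              max_monthly_incidents=2)
print(sla.is_met(age_hours=6, completeness_pct=99.9, incidents=1))  # True
```

Encoding the agreement this way lets the data team alert on SLA breaches automatically instead of discovering them in a quarterly review.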



Quote for the day:

"If you can't handle others' disapproval, then leadership isn't for you." -- Miles Anthony Smith

Daily Tech Digest - September 25, 2021

Top 5 Objections to Scrum (and Why Those Objections are Wrong)

Many software development teams are under pressure to deliver work quickly because other teams have deadlines they need to meet. A common objection to Agile is that teams feel that when they have a schedule to meet, a traditional waterfall method is the only way to go. Nothing could be further from the truth. Not only can Scrum work in these situations, but in my experience, it increases the probability of meeting challenging deadlines. Scrum works well with deadlines because it’s based on empiricism, lean thinking, and an iterative approach to product delivery. In a nutshell, empiricism is making decisions based on what is known. In practice, this means that rather than making all of the critical decisions about an initiative upfront, when the least is known, Agile initiatives practice just-in-time decision-making by planning smaller batches of work more often. Lean thinking means eliminating waste to focus only on the essentials, and iterative delivery involves delivering a usable product frequently.


The Future Is Data Center as a Service

The fact is that whether we realize it or not, we’ve gotten used to thinking of the data center as a fluid thing, particularly if we use cluster paradigms such as Kubernetes. We think of pods like tiny individual computers running individual applications, and we start them up and tear them down at will. We create applications using multicloud and hybrid cloud architectures to take advantage of the best situation for each workload. Edge computing has pushed this analogy even further, as we literally spin up additional nodes on demand, with the network adjusting to the new topology. Rightfully so; with the speed of innovation, we need to be able to tear down a data center that is compromised or bring up a new one to replace it, or to enhance it, at a moment’s notice. In a way, that’s what we’ve been doing with public cloud providers: instantiating “hardware” when we need it and tearing it down when we don’t. We’ve been doing this on the cloud providers’ terms, with each public cloud racing to lock in as many companies and workloads as possible with a race to the bottom on cost so they can control the conversation.


DevSecOps: 5 ways to learn more

There’s a clear connection between DevSecOps culture and practices and the open source community, a relationship that Anchore technical marketing manager Will Kelly recently explored in an opensource.com article, “DevSecOps: An open source story.” As you build your knowledge, getting involved in a DevSecOps-relevant project is another opportunity to expand and extend your experience. That could range from something as simple as joining a project’s community group or Slack to ask questions about a particular tool, to taking on a larger role as a contributor at some point. The threat modeling tool OWASP Threat Dragon, for example, welcomes new contributors via its GitHub and website, including testers and coders. ... The value of various technical certifications is a subject of ongoing – or at least on-again, off-again – debate in the InfoSec community. But IT certifications, in general, remain a solid complementary career development component. Considering a DevSecOps-focused certification track is in itself a learning opportunity since any credential worth more than a passing glance should require some homework to attain.


How Medical Companies are Innovating Through Agile Practices

Within regulatory constraints, there is plenty of room for successful use of Agile and Lean principles, despite the lingering doubts of some in quality assurance or regulatory affairs. Agile teams in other industries have demonstrated that they can develop without any compromise to quality. Additional documentation is necessary in regulated work, but most of it can be automated and generated incrementally, which is a well-established Agile practice. Medical product companies are choosing multiple practices, from both Agile and Lean. Change leaders within the companies are combining those ideas with their own deep knowledge of their organization’s patterns and people. They’re finding creative ways to achieve business goals previously out of reach with traditional “big design up front” practices. ... Our goal here is to show how the same core principles in Agile and Lean played out in very different day-to-day actions at the companies we profiled, and how they drove significant business goals for each company.


The Importance of Developer Velocity and Engineering Processes

At its core, an organization is nothing more than a collection of moving parts: a combination of people and resources moving towards a common goal. Delivering on your objectives requires alignment at the highest levels - something that becomes increasingly difficult as companies scale. Growth increases team sizes, creating more dependencies and communication channels within an organization. Collaboration and productivity issues can quickly arise in a fast-scaling environment. It has been observed that adding members to a team drives inefficiency with negligible benefit to team efficacy. This may sound counterintuitive, but it is a result of the additional communication lines, which increase the chance of organizational misalignment. The communication lines brought on by organizational growth also increase the risk of transparency issues, as teams can be unintentionally left “in the dark.” This effect is compounded if decision-making is done on the fly, especially if multiple people are making decisions independently of each other.
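The growth in communication lines follows the standard pairwise-channel formula, n(n-1)/2 for a team of n people, which makes the counterintuitive claim easy to verify:

```python
def communication_lines(team_size: int) -> int:
    """Number of distinct pairwise communication channels in a team
    of the given size: n * (n - 1) / 2."""
    return team_size * (team_size - 1) // 2

for n in (5, 10, 20):
    print(n, communication_lines(n))
# 5 -> 10, 10 -> 45, 20 -> 190: doubling the team size roughly
# quadruples the number of channels to keep aligned.
```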


Tired of AI? Let’s talk about CI.

Architectures become increasingly complex with each neuron. I suggest looking into how many parameters GPT-4 has ;). Now, you can imagine how many different architectures you can have with the infinite number of possible configurations. Of course, hardware limits our architecture size, but NVIDIA (and others) are scaling the hardware at an impressive pace. So far, we’ve only examined the computations that occur inside the network with established weights. Finding suitable weights is a difficult task, but luckily math tricks exist to optimize them. If you’re interested in the details, I encourage you to look up backpropagation. Backpropagation exploits the chain rule (from calculus) to optimize the weights. For the sake of this post, it’s not essential to understand how the learning of the weights happens, but it is necessary to know that backpropagation does it very well. But it’s not without its caveats. As NNs learn, they optimize all of the weights relative to the data. However, the weights must first be defined — they must have some value. This begs the question: where do we start?
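A single-weight example makes the chain-rule mechanics concrete. This is a deliberately minimal sketch, not a full network:

```python
# Minimal backpropagation sketch: fit y = w * x to one data point by
# applying the chain rule  d(loss)/dw = d(loss)/dy_pred * d(y_pred)/dw.
x, y_true = 2.0, 6.0      # the target relationship here is y = 3x
w, lr = 0.0, 0.05         # initial weight (the "where do we start?" value)

for _ in range(200):
    y_pred = w * x                     # forward pass
    dloss_dy = 2 * (y_pred - y_true)   # derivative of (y_pred - y_true)^2
    dy_dw = x                          # derivative of w*x with respect to w
    w -= lr * dloss_dy * dy_dw         # chain rule + gradient descent step

print(round(w, 3))  # converges to 3.0
```

Real backpropagation does exactly this, just layer by layer across millions of weights, which is also why the choice of starting values (initialization) matters in practice.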


How do databases support AI algorithms?

Oracle has integrated AI routines into their databases in a number of ways, and the company offers a broad set of options in almost every corner of its stack. At the lowest levels, some developers, for instance, are running machine learning algorithms in the Python interpreter that’s built into Oracle’s database. There are also more integrated options like Oracle’s Machine Learning for R, a version that uses R to analyze data stored in Oracle’s databases. Many of the services are incorporated at higher levels — for example, as features for analysis in the data science tools or analytics. IBM also has a number of AI tools that are integrated with their various databases, and the company sometimes calls Db2 “the AI database.” At the lowest level, the database includes functions in its version of SQL to tackle common parts of building AI models, like linear regression. These can be threaded together into customized stored procedures for training. Many IBM AI tools, such as Watson Studio, are designed to connect directly to the database to speed model construction.
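The exact names of Db2's in-database SQL functions vary by version and are not quoted here; as an illustration of what such a linear-regression primitive computes under the hood, here is ordinary least squares in plain Python:

```python
def fit_line(xs, ys):
    """Ordinary least squares for y = a + b*x: the computation an
    in-database linear-regression function performs over a column pair."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    b = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) \
        / sum((x - mx) ** 2 for x in xs)
    a = my - b * mx
    return a, b

a, b = fit_line([1, 2, 3, 4], [3, 5, 7, 9])   # points lie on y = 1 + 2x
print(a, b)  # 1.0 2.0
```

Running this inside the database rather than in application code avoids shipping the raw rows over the network, which is the main argument for the "AI database" approach.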


A Comprehensive Guide to Maximum Likelihood Estimation and Bayesian Estimation

An estimation function is a function that helps in estimating the parameters of a statistical model based on data with random values. Estimation is the process of extracting parameters from randomly distributed observations. In this article, we are going to have an overview of two estimation functions: Maximum Likelihood Estimation and Bayesian Estimation. Before looking at these two, we will try to understand the probability distributions on which both of these estimation functions depend. The major points to be discussed in this article are listed below. ... As the name suggests, in statistics this is a method for estimating the parameters of an assumed probability distribution. The likelihood function measures the goodness of fit of a statistical model on data for given values of the parameters. The estimation of parameters is done by maximizing the likelihood function, so that the observed data is most probable under the fitted model.
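A small numerical check of the idea: for a Gaussian with known variance, the likelihood-maximizing mean is exactly the sample mean, so the sample mean should beat any other candidate value:

```python
import math

def gaussian_log_likelihood(data, mu, sigma):
    """Log-likelihood of the data under N(mu, sigma^2)."""
    return sum(-0.5 * math.log(2 * math.pi * sigma ** 2)
               - (x - mu) ** 2 / (2 * sigma ** 2) for x in data)

data = [4.9, 5.1, 5.0, 4.8, 5.2]
mle_mu = sum(data) / len(data)   # closed-form MLE of the mean

# The sample mean out-scores every other candidate value of mu:
best = max([4.0, 4.5, mle_mu, 5.5, 6.0],
           key=lambda mu: gaussian_log_likelihood(data, mu, 1.0))
print(best == mle_mu)  # True
```

Bayesian estimation would instead combine this likelihood with a prior over mu and report the resulting posterior rather than a single maximizing value.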


DORA explorers see pandemic boost in numbers of 'elite' DevOps performers

DORA has now added a fifth metric, reliability, defined as the degree to which one "can keep promises and assertions about the software they operate." This is harder to measure, but nevertheless the research on which the report is based asked tech workers to self-assess their reliability. There was a correlation between reliability and the other performance metrics. According to the report, 26 per cent of those polled put themselves into the elite category, compared to 20 per cent in 2019, and seven per cent in 2018. Are higher performing techies more likely to respond to the survey? That seems likely, and self-assessment is also a flawed approach; but nevertheless it is an encouraging trend, presuming agreement that these metrics and survey methodology are reasonable. Much of the report reiterates conventional DevOps wisdom. NIST's characteristics of cloud computing [PDF] are found to be important. "What really matters is how teams implement their cloud services, not just that they are using cloud technologies," the researchers said, including things like on-demand self service for cloud resources.


Why Our Agile Journey Led Us to Ditch the Relational Database

Despite our developers having zero experience with MongoDB prior to our first release, they were still able to ship to production in eight weeks while eliminating more than 600 lines of code, coming in under time and budget. Pretty good, right? Additionally, the feedback was that the document data model helped eliminate the tedious data mapping and modeling work they were used to from a relational database. This amounted to more time that our developers could allocate to high-priority projects. When we first began using MongoDB in summer 2017, we had two collections in production. A year later, that had grown to 120 collections deployed in production, writing 10 million documents daily. Now each team was able to own its own dependency and have its own dedicated microservice and database, leading to a single pipeline for application and database changes. These changes, along with the hours saved not refactoring our data model, allowed us to cut our deployment time to minutes, down from hours or even days.



Quote for the day:

"Inspired leaders move a business beyond problems into opportunities." -- Dr. Abraham Zaleznik

Daily Tech Digest - September 24, 2021

Chef Shifts to Policy as Code, Debuts SaaS Offering

As for ease of use, Chef Enterprise Automation Stack (EAS) will also be available in both AWS and Azure marketplaces. The company has begun a Chef Managed Services program, and Chef EAS is also now available in a beta SaaS offering. All of these together, said Nanjundappa, will make Chef EAS “easy to access and adopt, which will help reduce overall time to value.” Looking forward, Nanjundappa said that the focus will include features like cloud security posture management (CSPM) and Kubernetes security. “We are seeing more and more compute workloads being migrated towards containers and Kubernetes. We currently offer Chef Inspec + content for CIS profiles for K8s and Docker that help secure Containers and Kubernetes,” wrote Nanjundappa. “But we will be adding additional abilities to maintain security posture in containers and Kubernetes platforms in the coming years.” More specifically, upcoming Kubernetes features will offer visibility into containers and the Kubernetes environment, scanning for common misconfigurations, vulnerability management, and runtime security.


Private vs. Public Blockchains For Enterprise Business Solutions

Not all blockchains are created equal. Businesses have always required a reasonable degree of privacy as well as control over their networks. Since the popularisation of the internet, and the advance of eCommerce, it’s been essential that companies protect their systems from outside attackers, both to preserve their workflow but also any sensitive information they might be storing. Hence, as blockchain technology becomes integrated into the modern digital workplace, it is only logical that private networks are often seen as preferable for many organizations. This is no big surprise — especially given that some of the main selling points of blockchain include a completely transparent ledger containing all data as well as the ability to move value around. And it’s clear why a business wouldn’t want just anyone to be able to access their internal network. This way, the company gets many of the benefits of the novel tech but can remain opaque to most of the world. It’s also quite valid that private blockchains are typically much more efficient than public ones. 


10 top API security testing tools

Many organizations likely don’t know how many APIs they are using, what tasks they are performing, or how high a permission level they hold. Then there is the question of whether those APIs contain any vulnerabilities. Industry and private groups have come up with API testing tools and platforms to help answer those questions. Some testing tools are designed to perform a single function, like mapping why specific Docker APIs are improperly configured. Others take a more holistic approach to an entire network, searching for APIs and then providing information about what they do and why they might be vulnerable or over-permissioned. Several well-known commercial API testing platforms are available as well as a large pool of free or low-cost open-source tools. The commercial tools generally have more support options and may be able to be deployed remotely through the cloud or even as a service. Some open-source tools may be just as good and have the backing of the community of users who created them. Which one you select depends on your needs, the security expertise of your IT teams, and budget.


Implementing risk quantification into an existing GRC program

How do risk professionals quantify risk? Using dollars and cents. Taking the information gathered in the Open FAIR model simulations, risk quantification further breaks down primary and secondary losses into six different types for each loss, allowing the organization to determine how best to categorize them. CISOs and other risk professionals can consider data points from the market, their data and additional available information. They can classify each type of data they’re inputting as high or low confidence. Primary loss equals anything that’s a direct loss to the company due to a specific event. Secondary loss includes something which may or may not occur, like reputational damage or potential lost revenue. Risk quantification also enables risk professionals to communicate risk to leaders and other stakeholders in a shared language everyone understands: dollars and cents. Quantifying risk in financial terms enables organizations to assess where their biggest loss exposures may be, conduct cost-benefit analyses for those initiatives designed to improve risk activities, and prioritize those risk mitigation activities based on their impact to the business.
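The simulation-plus-loss-categories idea can be sketched as a toy Monte Carlo model. All distributions and dollar figures below are invented for illustration; a real Open FAIR analysis uses calibrated estimates and confidence-weighted inputs:

```python
import random

random.seed(42)  # fixed seed so the sketch is reproducible

def simulate_annual_loss(trials=10_000):
    """Toy FAIR-style simulation: event frequency x (primary + secondary loss).
    All parameters are illustrative, not calibrated estimates."""
    losses = []
    for _ in range(trials):
        events = random.randint(0, 3)   # loss events in a simulated year
        total = 0.0
        for _ in range(events):
            primary = random.uniform(50_000, 250_000)   # direct loss per event
            # Secondary loss (e.g. reputational damage) occurs only sometimes:
            secondary = random.uniform(0, 500_000) if random.random() < 0.3 else 0.0
            total += primary + secondary
        losses.append(total)
    losses.sort()
    mean = sum(losses) / trials
    p95 = losses[int(trials * 0.95)]    # 95th-percentile annual loss
    return mean, p95

mean, p95 = simulate_annual_loss()
print(f"expected annual loss ~ ${mean:,.0f}, 95th percentile ~ ${p95:,.0f}")
```

Reporting both the expected loss and a tail percentile, in dollars, is what lets a CISO compare loss exposures and justify mitigation spend in language the business understands.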


The Architecture of a Web 3.0 application

Unlike Web 2.0 applications like Medium, Web 3.0 eliminates the middle man. There’s no centralized database that stores the application state, and there’s no centralized web server where the backend logic resides. Instead, you can leverage blockchain to build apps on a decentralized state machine that’s maintained by anonymous nodes on the internet. By “state machine,” I mean a machine that maintains some given program state and the future states allowed on that machine. Blockchains are state machines that are instantiated with some genesis state and have very strict rules (i.e., consensus) that define how that state can transition. Better yet, no single entity controls this decentralized state machine — it is collectively maintained by everyone in the network. And what about the backend server? Instead of a backend controlled by a single company, as Medium’s is, in Web 3.0 you can write smart contracts that define the logic of your applications and deploy them onto the decentralized state machine. This means that every person who wants to build a blockchain application deploys their code on this shared state machine.
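The "state machine with strict transition rules" idea can be illustrated with a toy ledger that rejects any invalid update. This is a conceptual sketch only, not how any real blockchain client is implemented:

```python
class TokenLedger:
    """Toy replicated state machine: balances can evolve only through
    transfers that satisfy the transition rules (a stand-in for consensus)."""

    def __init__(self, genesis):
        self.balances = dict(genesis)   # the genesis state

    def apply(self, sender, receiver, amount):
        # Strict transition rule: reject anything that breaks invariants.
        if amount <= 0 or self.balances.get(sender, 0) < amount:
            raise ValueError("invalid transition rejected")
        self.balances[sender] -= amount
        self.balances[receiver] = self.balances.get(receiver, 0) + amount

ledger = TokenLedger({"alice": 100})
ledger.apply("alice", "bob", 40)   # valid transition is applied
print(ledger.balances)             # {'alice': 60, 'bob': 40}
```

On a real chain, every node runs the same transition rules over the same ordered transactions, which is how thousands of machines end up agreeing on one state without a central server.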


A Major Advance in Computing Solves a Complex Math Problem 1 Million Times Faster

That's an exciting development when it comes to tackling the most complex computational challenges, from predicting the way the weather is going to turn, to modeling the flow of fluids through a particular space. Such problems are what this type of resource-intensive computing was developed to take on; now, the latest innovations are going to make it even more useful. The team behind this new study is calling it the next generation of reservoir computing. "We can perform very complex information processing tasks in a fraction of the time using much less computer resources compared to what reservoir computing can currently do," says physicist Daniel Gauthier, from The Ohio State University. "And reservoir computing was already a significant improvement on what was previously possible." Reservoir computing builds on the idea of neural networks – machine learning systems based on the way living brains function – that are trained to spot patterns in a vast amount of data.


Enterprise data management: the rise of AI-powered machine vision

The process of training machine learning algorithms is dramatically hindered for firms acquiring and centralising petabytes of unstructured data – whether video, picture, or sensor data. The AI development pipeline and production model tweaking are both delayed as a result of this centralised data processing method. In an industrial setting, this could result in product faults being overlooked, causing considerable financial loss or even putting lives in peril. Recently, distributed, decentralised architectures have become the preferred choice among businesses, resulting in most data being kept and processed at the edge to overcome the delay and latency challenges and address issues associated with data processing speeds. Deployment of edge analytics and federated machine learning technologies is bringing notable benefits while tackling the inherent security and privacy deficiencies of centralised systems. Take, for example, a large-scale surveillance network that continuously records video. Rather than wading through hours of footage of an empty building or street, effectively training an ML model to differentiate between specific items requires the model to assess only footage in which something new is observed.
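The aggregation step at the heart of federated learning, often called federated averaging, can be sketched in a few lines. This is a simplified illustration; real systems average full model tensors and layer on privacy protections:

```python
def federated_average(client_weights, client_sizes):
    """FedAvg sketch: weighted mean of per-client model weights.
    Raw data never leaves the edge devices; only weights are shared."""
    total = sum(client_sizes)
    dim = len(client_weights[0])
    return [
        sum(w[i] * n for w, n in zip(client_weights, client_sizes)) / total
        for i in range(dim)
    ]

# Two hypothetical edge cameras trained locally on 100 and 300 samples:
global_model = federated_average([[1.0, 2.0], [3.0, 4.0]], [100, 300])
print(global_model)  # [2.5, 3.5]
```

The weighting by sample count means a camera that saw more footage pulls the global model further toward its locally learned weights.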


The evolution of DRaaS

In the days in which DRaaS was born, it was not unusual for companies to maintain duplicate sets of hardware in an off-site location. Yes, they could replicate the data from their production site to the off-site location, but the expense of procuring and maintaining the secondary site was prohibitive. This led many to use the secondary location for old and retired hardware, or even to use less powerful computer systems and less efficient storage to save money. DRaaS is essentially DR delivered as a service. Expert third-party providers deliver tools or services, or both, to enable organizations to replicate their workloads to data centers managed by those providers. This cloud-based model allowed for greater agility than previous iterations of DR could easily provide, empowering businesses to run in a geographically different location as close to normal as possible while the original site was made ready for operations again. And technology improvements over the course of the 2010s only made the failover and failback process more seamless and granular.


JLL CIO: Hybrid Work, AI, and a Data and Tech Revolution

Offices typically offer multiple services, Wagoner explains. For instance, someone puts the paper in the printers. Someone helps employees with laptop problems. Someone runs the on-site cafeteria. Someone maintains the temperature and air quality of the office. As an employee, if there’s an issue, you need to go to a different group for each of these services. However, JLL’s vision is to remove that friction and collect all those services into a single experience app for employees. “With the experience app, we eliminate you having to know that you need to go to office services for one thing and then remember the URL for the IT help desk for another thing,” Wagoner says. “We don’t even necessarily replace any of the existing technology. We just give the end user a much better, easier experience to get to what they need.” This experience app is called “Jet,” and it can also inform workers of the rules for particular buildings during the pandemic. For instance, if you book a desk or approach a building, it might tell you whether that building has a vaccine requirement or a masking requirement.


Intel: Under attack, fighting back on many fronts

Each processor architecture has strengths and weaknesses, and all are better or best suited to specific use cases. Intel’s XPU project, announced last year, seeks to offer a unified programming model for all types of processor architectures and match every application to its optimal architecture. XPU means you can have x86, FPGA, AI and machine-learning processors, and GPUs all mixed into your network, and the app is compiled to the processor best suited for the job. That is done through the oneAPI project, which goes hand-in-hand with XPU. XPU is the silicon part, while oneAPI is the software that ties it all together. oneAPI is a heterogeneous programming model with code written in common languages such as C, C++, Fortran, and Python, and standards such as MPI and OpenMP. The oneAPI Base Toolkit includes compilers, performance libraries, and analysis and debug tools for general-purpose computing, HPC, and AI. It also provides a compatibility tool that aids in migrating code written in Nvidia’s CUDA to Data Parallel C++ (DPC++), the language of Intel’s GPU.



Quote for the day:

"Don't measure yourself by what you have accomplished. But by what you should have accomplished with your ability." -- John Wooden

Daily Tech Digest - September 23, 2021

The ‘Great Resignation’ is coming for software development

Companies of all sizes should be strategic about the use of developer time. Why waste human resources and attention on tasks that can be done faster and less expensively through automation? The cost of a developer minute is roughly $1.65, while the cost of a compute minute for automating a formerly manual process is approximately $0.006. Bear in mind the human cost of developers working on routine, low-impact, uninteresting activities: it is not a good use of engineering skills, time, or attention, and it makes it hard for someone highly trained to stay motivated. Instead, automate core building blocks as much as possible. Implement solutions that integrate easily with other tooling and processes. Removing friction when onboarding new developers allows for a simple life. A simple life means developers are innovating, not toiling. A good place to start, if you haven’t already, is with CI/CD. A reliable build tool allows teams to automate their processes and practice good hygiene. That way, when systems become more complex, your business will have a foundation in place to handle them (you can thank me later).
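A quick back-of-the-envelope check of the cost figures quoted above; the 30-minute task and 250 runs per year are hypothetical inputs, only the per-minute rates come from the article:

```python
DEV_MINUTE = 1.65       # cost of a developer minute (figure from the article)
COMPUTE_MINUTE = 0.006  # cost of a compute minute (figure from the article)

# Hypothetical: a routine 30-minute task performed every working day.
task_minutes = 30
runs_per_year = 250

manual_cost = DEV_MINUTE * task_minutes * runs_per_year
automated_cost = COMPUTE_MINUTE * task_minutes * runs_per_year

print(f"manual: ${manual_cost:,.2f}/yr, automated: ${automated_cost:,.2f}/yr")
print(f"cost ratio: {DEV_MINUTE / COMPUTE_MINUTE:.0f}x")  # 275x
```

At a 275x cost ratio, even modest automation of daily chores pays for itself quickly, before counting the morale cost of toil.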


The Value Creation System

The Value Equation provides the foundational point of reference for an enterprise, both as a driver and as a constraint for its modus operandi. Bound within the confines of the Value Equation, the enterprise emerges as a conduit for value creation – essentially, as a Value Creation System made up of myriad fixed and moving parts which collude and collide to generate the products or services offered to the market. In fact, the enterprise closely resembles a living, breathing organism, in that it can self-organize, learn, adapt, diversify, specialize, and evolve “emergent properties” such as innovative thinking and conscious risk-taking behaviors. As a result, an enterprise is considered to be a complex adaptive system. What distinguishes an enterprise from other complex adaptive systems such as the stock market or the cells in an organism is the fact that it is deliberately organized around the creation of value. The enterprise is essentially a Value Creation System designed to ingest ‘raw resources’ such as data, materials, capital and labor power, and produce outputs – services, products, information – useful to and desired by their customers.


14 things you need to know about data storage management

“Setting the right data retention policies is a necessity for both internal data governance and legal compliance,” says Chris Grossman, senior vice president, Enterprise Applications, Rand Worldwide and Rand Secure Archive, a data archiving and management solution provider. “Some of your data must be retained for many years, while other data may only be needed for days.” “When setting up processes, identify the organization’s most important data and prioritize storage management resources appropriately,” says Scott-Cowley. “For example, email may be a company’s top priority, but storing and archiving email data for one particular group, say the executives, may be more critical than other groups,” he says. “Make sure these priorities are set so data management resources can be focused on the most important tasks.” ... Similarly, “look for a solution that provides the flexibility to choose where data is stored: on premise and/or in the cloud,” says Jesse Lipson, founder of ShareFile and VP & GM of Data Sharing at Citrix. “The solution should allow you to leverage existing investments in data platforms such as network shares and SharePoint.”


Big Tech & Their Favourite Deep Learning Techniques

A subsidiary of Alphabet, DeepMind remains synonymous with reinforcement learning. From AlphaGo to MuZero and the recent AlphaFold, the company has been championing breakthroughs in reinforcement learning. AlphaGo was the first computer program to defeat a professional human Go player. It combines an advanced search tree with deep neural networks. These neural networks take a description of the Go board as input and process it through a number of different network layers containing millions of neuron-like connections. One neural network, the ‘policy network,’ selects the next move to play, while the other, the ‘value network,’ predicts the winner of the game. ... Facebook has become synonymous with self-supervised learning techniques across domains via fundamental, open scientific research. It looks to improve image, text, audio and video understanding systems in its products. As with its pretrained language model XLM, self-supervised learning is accelerating important applications at Facebook today, like proactive detection of hate speech.


New Nagios Software Bugs Could Let Hackers Take Over IT Infrastructures

As many as 11 security vulnerabilities have been disclosed in Nagios network management systems, some of which could be chained to achieve pre-authenticated remote code execution with the highest privileges, as well as lead to credential theft and phishing attacks. Industrial cybersecurity firm Claroty, which discovered the flaws, said flaws in tools such as Nagios make them an attractive target owing to their "oversight of core servers, devices, and other critical components in the enterprise network." The issues have since been fixed in updates released in August with Nagios XI 5.8.5 or above, Nagios XI Switch Wizard 2.5.7 or above, Nagios XI Docker Wizard 1.13 or above, and Nagios XI WatchGuard 1.4.8 or above. "SolarWinds and Kaseya were likely targeted not only because of their large and influential customer bases, but also because of their respective technologies' access to enterprise networks, whether it was managing IT, operational technology (OT), or internet of things (IoT) devices," Claroty's Noam Moshe said in a write-up published Tuesday, noting how the intrusions targeting the IT and network management supply chains emerged as a conduit to compromise thousands of downstream victims.


Practical API Design Using gRPC at Netflix

Alex Borysov and Ricky Gardiner, senior software engineers at Netflix, note that API clients often do not use all the fields present in the responses to their requests. This transmission and computation of irrelevant information for one specific request can waste bandwidth and computational resources, increase the error rate, and increase the overall latency. The authors argue that such waste can be avoided when API clients specify which fields are relevant to them with every request. They point out that this feature is present out of the box with API standards such as GraphQL and JSON:API and question whether Netflix's wide usage of gRPC in the backend could benefit from an identical mechanism. They found that a particular message called FieldMask is defined in Protobuf, the underlying message encoding of gRPC. When included in API requests, it allows clients to list which fields are relevant and can be applied to both read and modify operations.
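The FieldMask idea can be shown without a protobuf dependency: the client names the fields it cares about as dotted paths, and the server trims (or never computes) everything else. In real gRPC this is the google.protobuf.FieldMask well-known type; the field names and response data below are invented for illustration.

```python
def apply_field_mask(message: dict, paths: list) -> dict:
    """Keep only the fields listed in `paths` (dotted paths for nesting)."""
    result = {}
    for path in paths:
        head, _, rest = path.partition(".")
        if head not in message:
            continue
        if rest and isinstance(message[head], dict):
            # Recurse into the nested message with the remainder of the path.
            sub = apply_field_mask(message[head], [rest])
            result.setdefault(head, {}).update(sub)
        else:
            result[head] = message[head]
    return result

# Hypothetical full server response; the client only needs two fields.
full_response = {
    "title": "Example Show",
    "synopsis": "A long text the client never displays...",
    "cast": {"lead": "Jane Doe", "size": 30},
}

trimmed = apply_field_mask(full_response, ["title", "cast.lead"])
```

The bandwidth saving comes from `synopsis` and `cast.size` never being serialized; in a real service the mask would also let the server skip fetching those fields at all.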


Ransomware is Harming Cybersecurity Strategy: What Can Organizations Do?

The answer is to layer up best-in-class protection across endpoints, servers, cloud platforms, web and email gateways, and networks. But the secret sauce in all this must be intelligence. It should help organizations understand where their highest risk vulnerabilities are internally. It can also drive visibility into broader threat activity outside the corporate perimeter—whether it’s chatter on dark web forums or new registrations of phishing sites. With open APIs and automation, organizations can integrate this intelligence seamlessly into their best-of-breed security environment, freeing up analysts to focus on high-value tasks and accelerating detection and response times. For example, a new phishing site IP address could be blocked in minutes before the group behind it has even been able to send your employees scam emails. Likewise, intelligence on new ransomware IOCs could be fed into intrusion prevention tools to enhance resilience before you’re even attacked. The right threat intel can also help red teams probe for weaknesses and proactively build stronger defenses.
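The IOC-to-blocklist automation described above can be sketched with the standard library. The feed format and the `block_ip` hook are hypothetical stand-ins; a real integration would call your firewall or IPS vendor's API.

```python
import ipaddress

def parse_ioc_feed(feed_lines):
    """Yield only syntactically valid IP indicators from a raw feed."""
    for line in feed_lines:
        candidate = line.strip()
        if not candidate or candidate.startswith("#"):
            continue  # skip blanks and comments
        try:
            yield str(ipaddress.ip_address(candidate))
        except ValueError:
            continue  # domains/hashes would need separate handling

blocklist = set()

def block_ip(ip):
    """Stand-in for the API call that pushes a block rule."""
    blocklist.add(ip)

# Hypothetical feed entries, including one malformed line to be rejected.
feed = ["# phishing infrastructure", "203.0.113.7", "not-an-ip", "2001:db8::1"]
for ioc in parse_ioc_feed(feed):
    block_ip(ioc)
```

Validating indicators before blocking matters: a malformed or overly broad entry pushed automatically into a prevention tool can cause an outage as surely as an attacker can.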


To build trust with employees, be consistent

A lot of leaders seem to think they walk the talk on culture. PwC’s survey shows that 73% of senior management think they do. But only 46% of the rest of the workforce agree. We’ve seen firsthand that this mismatch damages trust. And without trust, it can be difficult to motivate people, bring about change, and encourage the desired behaviors. One of our team members at the Katzenbach Center, a former US soldier, tells a story that accentuates the importance of leadership authenticity. In the armed forces, which rely on the ranks obeying their leaders’ instructions without question, Army leaders routinely make sure they eat only after their troops have been fed, to give a clear signal that the troops’ welfare is their top priority. But on one occasion when our colleague was a first lieutenant in the 25th Infantry Division, his entire unit was locked down because a piece of equipment was missing. “The lockdown went on all day and into the evening, and instead of hot food, we were given MRE [meal ready-to-eat] rations. But then some of the soldiers saw the commander’s wife sneaking him Burger King. After that, he was completely ineffective as a leader because no one in the unit respected him.”


What is a Blockchain and how does it work on Bitcoin?

The origins of Blockchain go back to 1991, when Stuart Haber and W. Scott Stornetta described the first work on a chain of cryptographically secured blocks. In this study, Haber and Stornetta sought to create mechanisms for digital seals that would order registered files in a unique and secure way. This represented a practical computational solution for ordering and handling digital documents so that they could not be modified or manipulated. Adoption surged in 2008 with the arrival of the cryptocurrency Bitcoin, and the technology is now being used for other commercial applications as well, with annual growth of 51% estimated for 2022. ... Even with these security locks, someone using a computer able to calculate hundreds of fingerprints per second could attempt to recompute the fingerprints of the preceding and following blocks. To guard against this, the Blockchain has a mechanism called "proof of work," which purposely delays the process of creating a new block of information; in other words, before a new block is created, the system audits the entire chain originally created. ...
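The deliberate delay that proof of work introduces can be shown in a few lines: creating a block requires searching for a nonce whose hash meets a difficulty target, and raising the target makes the search exponentially slower. The block contents and difficulty below are illustrative, not Bitcoin's actual parameters.

```python
import hashlib

def mine(block_data: str, difficulty: int):
    """Search for a nonce whose SHA-256 hash has `difficulty` leading zeros."""
    prefix = "0" * difficulty
    nonce = 0
    while True:
        digest = hashlib.sha256(f"{block_data}|{nonce}".encode()).hexdigest()
        if digest.startswith(prefix):
            return nonce, digest
        nonce += 1

# A toy "block": in Bitcoin this would include the previous block's hash,
# a Merkle root of transactions, a timestamp, and the difficulty target.
nonce, digest = mine("prev_hash|tx_data", difficulty=3)
```

Because each block's hash depends on the previous block's hash, an attacker who alters one block must redo this search for every later block, which is what makes tampering impractical in practice.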


Russian-Linked Group Using Secondary Backdoor Against Targets

The newly discovered backdoor, which the researchers call "TinyTurla," has been deployed against targets in the U.S. and Germany over the last two years. More recently, however, Turla used the malware against government organizations and agencies in Afghanistan before the country was overtaken by the Taliban in August, according to the report. "This malware specifically caught our eye when it targeted Afghanistan prior to the Taliban's recent takeover of the government there and the pullout of Western-backed military forces," according to the analysis. "Based on forensic evidence, Cisco Talos assesses with moderate confidence that this was used to target the previous Afghan government." Turla has been active since the mid-1990s and is one of the oldest operating advanced persistent threat groups with links to Russia's FSB, formerly the KGB, according to a study published in February by security researchers at VMware. The group, which typically targets government or military agencies, is also called Belugasturgeon, Ouroboros, Snake, Venomous Bear and Waterbug and is known for constantly changing its techniques and methods to avoid detection.



Quote for the day:

"Risks are the seeds from which successes grow." -- Gordon Tredgold