Daily Tech Digest - September 19, 2021

The main types of IoT sensors in the market today

IoT sensors are now available in a variety of sizes, as well, allowing for more portability and increased ease of use. This, in turn, has played a key role in establishing new use cases across various industries. “Sensors have developed from electro mechanical devices, to micro electro mechanical devices (MEMS), to nano electro mechanical devices (NEMS) and now Bio-MEMS, which can sense molecular level changes,” said Deepak Parameswaran, chief business officer for Industry NxT at Mindtree. ... Manufacturing is another industry with plenty of use cases for IoT sensor technology, as well as much potential going forward. With the Industrial Internet of Things (IIoT) aiding innovation in a sector that’s been otherwise slower to adopt digital processes, this trend is showing no signs of slowing down. Richard Simmons, group vice-president of technology – IoT at Logicalis Group, explained: “One of our customers had dedicated employees to go around with clipboards and climb up cranes and large, often dangerous, equipment just to write down how long it was used for. Then they go back to their office and record it.


IoT Will Spur Diversification In Indian Telecom

Specifically, narrowband IoT (NB-IoT) is unleashing powerful machine and sensor connectivity, delivering specific data, low latency, and increased power efficiency. And, it’s likely to drive millions of different types of connections and use cases. Connecting billions of devices presents challenges due to several concerns -- security, standardisation, authentication, and ubiquitous connectivity, the number one roadblock when deploying IoT. Nowhere is this dynamic more apparent than in India, a country largely connected using inadequate terrestrial telecom networks and very limited coverage across India’s vast hinterlands. Today, connectivity remains intermittent at best, often failing totally, while many still experience non-existent coverage in remote areas, where remote farms operate, at the borders, at rural power line stations, at last-mile distribution centres, far out to sea, and at many other industrial operations. Even as IoT deployments grow to connect billions of machines, the increased volume of devices will take the deployments into remote parts, where they will experience little or no connectivity - and what connectivity is available will not be affordable.


Don’t Leave Your APIs Undefended Against Security Risks

Speaking of silos, disparate security approaches also create silos that can affect visibility. This can hinder threat detection and complicate the organization’s ability to see the full scope of a security incident. When creating a cloud security strategy, DevOps teams should consider adopting and implementing consistent policies that work in, across and outside of cloud environments. Use tools that allow for security configurations that can be centrally applied, tested and updated, and that support creating a consolidated view of the threats you face. This kind of consolidated view will also help security teams focus more on response and less on collecting information. A security platform that includes WAAP functionality combined with a common management, analysis and orchestration interface can help. This platform approach should include API security controls that can be deployed for every exposed API, which could include APIs deployed in multicloud and hybrid environments. The solutions you implement should also have the ability to block API threats using a WAF or other API gateway. 


Kamikaze satellites and shuttles adrift: Why cyberattacks are a major threat to humanity's ambitions in space

Although there are currently no known examples of cybercriminals hacking directly into satellites, vulnerabilities in the user and ground segments have been exploited in an attempt to alter the flight path of satellites in orbit. “By design, every piece of infrastructure has entry points, each of which has the potential to create opportunities for attackers,” said Yamout. “On Earth, with all the advancements and new technologies, we have a relatively good level of security protection. But in space systems, the protections are much more basic.” “With evolving technology and science, it is likely we will visit space more than we used to. Cybersecurity has to be considered when designing space systems in all layers and must integrate in all segments and phases of the space domain evolution.” No matter how well space infrastructure is protected, however, criminals will find a way to launch attacks. The question then becomes: who and why? At the moment, the incentives for cyber actors to launch attacks against space infrastructure are relatively few.


Computer vision can help spot cyber threats with startling accuracy

The traditional way to detect malware is to search files for known signatures of malicious payloads. Malware detectors maintain a database of virus definitions which include opcode sequences or code snippets, and they search new files for the presence of these signatures. Unfortunately, malware developers can easily circumvent such detection methods using different techniques such as obfuscating their code or using polymorphism techniques to mutate their code at runtime. Dynamic analysis tools try to detect malicious behavior during runtime, but they are slow and require the setup of a sandbox environment to test suspicious programs. In recent years, researchers have also tried a range of machine learning techniques to detect malware. These ML models have managed to make progress on some of the challenges of malware detection, including code obfuscation. But they present new challenges, including the need to learn too many features and a virtual environment to analyze the target samples. Binary visualization can redefine malware detection by turning it into a computer vision problem.
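As a rough sketch of the binary-visualization idea (the function name and layout below are illustrative, not taken from any specific tool): each byte of a file becomes one grayscale pixel, so structural regions of the binary show up as visual texture that an image classifier could learn to distinguish.

```python
import math

def bytes_to_image(data: bytes, width: int = 16):
    """Map a byte sequence to a 2D grayscale 'image': each byte (0-255)
    becomes one pixel intensity, laid out in rows of `width` pixels,
    zero-padded to fill the final row."""
    height = math.ceil(len(data) / width)
    padded = data + b"\x00" * (height * width - len(data))
    return [list(padded[r * width:(r + 1) * width]) for r in range(height)]

# A tiny fake "binary": structured regions produce visible bands/texture
sample = bytes(range(64)) + b"\x90" * 32   # e.g. a run of NOPs shows as a flat band
image = bytes_to_image(sample, width=16)
```

A real pipeline would feed such images (at a fixed resolution) into a convolutional network; the point here is only that byte-level structure survives the conversion and becomes a visual feature.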


The New Wave of Web 3.0 Metaverse Innovations

The term “metaverse” was coined by science fiction writer Neal Stephenson in his book Snow Crash. He described a popular virtual world experienced in the first person by users equipped with augmented reality technology. This idea was taken a step further in Ready Player One by Ernest Cline. He defined it as a fully realized digital world that exists beyond the analog one in which we live. While this futuristic vision may seem far-fetched, the reality which is beginning to take shape is just as exciting. Hailed as the next iteration of the internet, the metaverse sits at the intersection of the web, augmented reality and the blockchain. Until recently, you could only experience the internet when you went to it through a browser or app. The metaverse, with its wide range of connectivity types, devices and technologies, will allow us to experience the internet on a huge variety of devices — from commonplace screens and cell phones to VR (virtual reality) and AR (augmented reality). Ultimately, the metaverse will allow us to play games, shop, trade, chat, work or even attend concerts. 


Artificial Intelligence in Finance: Opportunities and Challenges

One of the crucial applications of machine learning in the financial industry is credit scoring. Many financial institutions, be it large banks or smaller fintech companies, are in the business of lending money. And to do so, they need to accurately assess the creditworthiness of an individual or another company. Traditionally, such decisions were made by analysts after conducting an interview with an individual and gathering the relevant data points. However, artificial intelligence allows for a faster and more accurate assessment of a potential borrower, using more complex methods in comparison to the scoring systems of the past. ... Given how inflation is affecting our savings and the fact that it is no longer profitable to keep the money in a savings account, more and more people are interested in passive investing. And this is exactly where robo-advisors come into play. They are wealth management services in which AI puts together portfolio recommendations based on the investors’ individual goals (both short- and long-term), risk preferences, and disposable income.


HowTo: Accelerate the Enterprise Journey to Passwordless

Adopting passwordless requires trust in authentication. The number one concern raised in conversations around passwordless is this: what happens when this new factor is compromised? The answer lies in the next set of security benefits from passwordless. Pair strong user authentication with device authentication. By configuring workflows with rules, correlation, and policies, at-risk authentications can be identified and blocked, such as people using suspicious or new devices. More mature approaches will include user behavior analytics. Consider a criminal who is cloning or spoofing a person’s biometrics. With device authentication, the adversary will also need to compromise the person’s phone and computer without being detected. With behavior analytics, the criminal will also need to open apps that the person normally uses during typical work hours — again, undetected. This increases the complexity required for an attack, increasing the organization’s likelihood of recognizing and responding before the attempt is successful. Increasing trust in authentication creates barriers for criminals. It reduces risk and enables us to investigate factors other than passwords.
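A minimal sketch of the layered policy described above, with purely illustrative signal names and outcomes: the biometric alone is never sufficient, and each additional layer an attacker must clone raises the bar.

```python
def evaluate_authentication(biometric_ok: bool,
                            known_device: bool,
                            typical_behavior: bool) -> str:
    """Layered check: a spoofed biometric alone is not enough; the attacker
    must also present a trusted device and plausible behavior (usual apps,
    usual working hours). Outcomes and thresholds are illustrative only."""
    if not biometric_ok:
        return "deny"
    corroborating = int(known_device) + int(typical_behavior)
    if corroborating == 2:
        return "allow"
    if corroborating == 1:
        return "step-up"   # request an additional factor before proceeding
    return "block"         # biometric passed but nothing else matches: suspicious
```

Note how a cloned biometric with no known device and no typical behavior lands in "block" — the at-risk authentication the excerpt describes.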


Democratisation of AI is crucial to harmonising omnichannel customer experience

Today AI is merely a tool, but in the near future, AI will become a new corporate competency that is crucial to the delivery of a consistent CX through every customer touchpoint. This core competency is the ability to get real-time data from the market and execute real-time decisions. Adopting and using Business AI throughout the enterprise to automate business decisions will help companies develop this corporate competency. This is critically important to delivering a consistent CX because customer expectation is so ephemeral. Every intent signal, transaction data, customer interaction insight, real-time materials cost, market volatility, inflationary pressure, and even competitive moves can potentially change a customer’s expectation. Without AI it’s virtually impossible to keep up with the dynamics of customer expectation. While this new AI competency will be important for every business, it’s often cost restrictive to develop in-house. The teams, systems, and infrastructure required to test, manage, secure and maintain proprietary AI systems can oftentimes turn the deployment of AI into a full-blown R&D operation. 


Celebrating AI-infused talent management at the Eightfold conference

Achieving greater efficiency and scale is the most significant benefit HR teams say AI provides today. AI also enables companies to reduce turnover because it allows them to build employee career paths and present growth opportunities. When internal mobility is high and turnover is low, HR teams can focus their time and resources on scaling the organization. ...  AI can’t solve all the problems HR faces; however, it can provide contextual data and intelligence to help reframe a problem, so HR teams know what needs to be solved. Contextual intelligence is the goal, with AI supporting HR teams’ experience, insights, and intuition. ... Talent mobility, diversity, equity and inclusion, talent acquisition, talent management, and governance were the leading topics covered in the 33 sessions. Based on customer presentations, it’s clear Eightfold is concentrating on helping their customers accelerate and improve talent acquisition. Customers including Dexcom and Micron explained how they’re relying on Eightfold for each stage of talent acquisition, including sourcing, screening, interview scheduling, diversity hiring, candidate experience, candidate relationship management, and on-campus hiring.



Quote for the day:

"Confident and courageous leaders have no problems pointing out their own weaknesses and ignorance. " -- Thom S. Rainer

Daily Tech Digest - September 18, 2021

10 Steps to Simplify Your DevSecOps

Automation is key when balancing security integrations with speed and scale. DevOps adoption already focuses on automation, and the same holds true for DevSecOps. Automating security tools and processes ensures teams are following DevSecOps best practices. Automation ensures that tools and processes are used in a consistent, repeatable and reliable manner. It’s important to identify which security activities and processes can be completely automated and which require some manual intervention. For example, running a SAST tool in a pipeline can be automated entirely; however, threat modeling and penetration testing require manual efforts. The same is true for processes. A successful automation strategy also depends on the tools and technology used. One important automation consideration is whether a tool has enough interfaces to allow its integration with other subsystems. For example, to enable developers to do IDE scans, look for a SAST tool that has support for common IDE software. Similarly, to integrate a tool into a pipeline, review whether the tool offers APIs or webhooks or CLI interfaces that can be used to trigger scans and request reports.
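As an illustration of the automation point, a pipeline gate over a SAST tool's report might look like the sketch below. The findings' JSON shape and severity labels are hypothetical — real tools differ — but most can emit machine-readable reports that a script like this consumes to break the build automatically.

```python
# Hypothetical SAST report: a list of findings with rule IDs and severities
findings = [
    {"rule": "sql-injection", "severity": "high"},
    {"rule": "weak-hash", "severity": "medium"},
]

def gate_pipeline(findings, fail_on=("critical", "high")):
    """Automated policy: fail the pipeline stage when any finding at or above
    the configured severity is present; lower-severity findings are reported
    but do not block the build."""
    blocking = [f for f in findings if f["severity"] in fail_on]
    return ("fail", blocking) if blocking else ("pass", [])

status, blocking = gate_pipeline(findings)
```

In a CI system this would run after the (fully automatable) scan step, turning the policy itself into code that is consistent and repeatable.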


Next-Generation Layer-One Blockchain Protocols Remove the Financial Barriers to DeFi & NFTs

The rapidly expanding world of DeFi is singlehandedly reshaping the global financial infrastructure as all manner of stocks, securities and transferable assets are slowly but surely being tokenized and stored in digital wallets. New protocols are arising daily that allow anyone with an internet connection or smartphone to access ecosystems that are equivalent to digital savings accounts but offer much more attractive yields than those found in the traditional banking sector. Unfortunately, with most of the top DeFi protocols currently operating on the Ethereum blockchain, the high cost of conducting transactions on the network has priced out ordinary individuals living in countries where even a $5 transaction fee is a significant amount of money capable of feeding a family for a week. This is where competing new blockchain platforms have the biggest opportunity for growth and adoption thanks to cross-chain bridges, a growing number of opportunities to earn a yield on new DeFi protocols and significantly smaller transaction cost.


The Dance Between Compute & Network In The Datacenter

In an ideal world, there is a balance between compute, network, and storage that allows for the CPUs to be fed with data such that they do not waste too much of their processing capacity spinning empty clocks. System architects try to get as close as they can to the ideals, which shift depending on the nature of the compute, the workload itself, and the interconnects across compute elements — which are increasingly hybrid in nature. We can learn some generalities from the market at large, of course, which show what people do as opposed to what they might do in a more ideal world than the one we all inhabit. We tried to do this in the wake of Ethernet switch and router stats and server stats for the second quarter being released by the box counters at IDC. We covered the server report last week, noting the rise of the single-socket server, and now we turn to the Ethernet market and drill down into the datacenter portion of it that we care about greatly and make some interesting correlations between compute and network.


ZippyDB: The Architecture of Facebook’s Strongly Consistent Key-Value Store

A ZippyDB deployment (named "tier") consists of compute and storage resources spread across several regions worldwide. Each deployment hosts multiple use cases in a multi-tenant fashion. ZippyDB splits the data belonging to a use case into shards. Depending on the configuration, it replicates each shard across multiple regions for fault tolerance, using either Paxos or async replication. A subset of replicas per shard is part of a quorum group, where data is synchronously replicated to provide high durability and availability in case of failures. The remaining replicas, if any, are configured as followers using asynchronous replication. Followers allow applications to have many in-region replicas to support low-latency reads with relaxed consistency while keeping the quorum size small for lower write latency. This flexibility in replica role configuration within a shard allows applications to balance durability, write performance, and read performance depending on their needs. ZippyDB provides configurable consistency and durability levels to applications, specified as options in read and write APIs. For writes, ZippyDB persists the data on a majority of replicas by default. 
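The quorum behaviour described here can be sketched roughly as follows (the replica layout and helper are illustrative, not ZippyDB's actual code): only quorum members count toward the majority a write needs, while followers replicate asynchronously and serve relaxed-consistency reads.

```python
from dataclasses import dataclass

@dataclass
class Replica:
    region: str
    role: str        # "quorum" (synchronous) or "follower" (async)
    acked: bool = False

def write_durable(replicas) -> bool:
    """A write is acknowledged once a majority of the quorum group has
    persisted it; followers don't count toward durability."""
    quorum = [r for r in replicas if r.role == "quorum"]
    return sum(r.acked for r in quorum) > len(quorum) // 2

shard = [
    Replica("us-east", "quorum", acked=True),
    Replica("eu-west", "quorum", acked=True),
    Replica("ap-south", "quorum", acked=False),  # slow region: write still succeeds
    Replica("us-east", "follower"),              # in-region, relaxed-consistency reads
    Replica("eu-west", "follower"),
]
```

Keeping the quorum small (three here) bounds write latency, while the followers provide the extra in-region read capacity the excerpt mentions.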


CISA, FBI: State-Backed APTs May Be Exploiting Critical Zoho Bug

The FBI, CISA and the U.S. Coast Guard Cyber Command (CGCYBER) warned today that state-backed advanced persistent threat (APT) actors are likely among those who’ve been actively exploiting a newly identified bug in a Zoho single sign-on and password management tool since early last month. At issue is a critical authentication bypass vulnerability in Zoho ManageEngine ADSelfService Plus platform that can lead to remote code execution (RCE) and thus open the corporate doors to attackers who can run amok, with free rein across users’ Active Directory (AD) and cloud accounts. The Zoho ManageEngine ADSelfService Plus is a self-service password management and single sign-on (SSO) platform for AD and cloud apps, meaning that any cyberattacker able to take control of the platform would have multiple pivot points into both mission-critical apps (and their sensitive data) and other parts of the corporate network via AD. It is, in other words, a powerful, highly privileged application which can act as a convenient point-of-entry to areas deep inside an enterprise’s footprint, for both users and attackers alike.


Algorithmic Thinking for Data Science

Generalizing the definition and implementation of an algorithm is algorithmic thinking. What this means is, if we have a standard of approaching a problem, say a sorting problem, in situations where the problem statement changes, we would not have to completely modify the approach. There will always be a starting point to attack the new problem set. That’s what algorithmic thinking does: it gives a starting point. ... Why is the calculation of time and space complexities important, now more than ever? It has to do with something we discussed earlier – the amount of data getting processed today. To explain this better, let us walk through a few examples that will showcase the importance of large amounts of data in algorithm building. The algorithms that we casually create for problem-solving in a classroom are very different from what the industry requires when the amount of data being processed is more than a million times what we deal with in test scenarios. And time complexities are always seen in action when the input size is significantly larger.
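A quick way to see time complexity "in action" as input size grows: count the comparisons a quadratic sort performs on worst-case (reversed) input. The counts below follow directly from the n(n-1)/2 comparison bound.

```python
def bubble_sort_ops(items):
    """O(n^2) bubble sort that also counts comparisons, so growth is visible."""
    a = list(items)
    ops = 0
    for i in range(len(a)):
        for j in range(len(a) - 1 - i):
            ops += 1                      # one comparison per inner step
            if a[j] > a[j + 1]:
                a[j], a[j + 1] = a[j + 1], a[j]
    return a, ops

small, small_ops = bubble_sort_ops(range(100, 0, -1))    # n = 100
large, large_ops = bubble_sort_ops(range(1000, 0, -1))   # n = 1000
# 10x the input -> roughly 100x the comparisons: quadratic growth dominates
```

On classroom-sized inputs both runs feel instant; at industrial scale that hundredfold-per-tenfold growth is exactly why complexity analysis matters.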


Forget Microservices: A NIC-CPU Co-Design For The Nanoservices Era

Large applications hosted at the hyperscalers and cloud builders — search engines, recommendation engines, and online transaction processing applications are but three good examples — communicate using remote procedure calls, or RPCs. The RPCs in modern applications fan out across these massively distributed systems, and finishing a bit of work often means waiting for the last bit of data to be manipulated or retrieved. As we have explained many times before, the tail latency of massively distributed applications is often the determining factor in the overall latency in the application. And that is why the hyperscalers are always trying to get predictable, consistent latency across all communication across a network of systems rather than trying to drive the lowest possible average latency and letting tail latencies wander all over the place. The nanoPU research set out, says Ibanez, to answer this question: What would it take to absolutely minimize RPC median and tail latency as well as software processing overheads?
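A toy simulation makes the tail-latency point concrete (the latency values and straggler probability below are invented for illustration): with even a 1% chance of a slow reply per server, a request that fans out to 100 servers is slow most of the time, because it must wait for the last reply.

```python
import random

random.seed(42)

def server_latency_ms() -> float:
    # 99% of replies take 1 ms; 1% are 100 ms stragglers (the "tail")
    return 100.0 if random.random() < 0.01 else 1.0

def rpc_latency_ms(fanout: int) -> float:
    # The RPC completes only when the slowest sub-request returns
    return max(server_latency_ms() for _ in range(fanout))

def slow_fraction(fanout: int, trials: int = 10_000) -> float:
    """Fraction of requests that hit at least one straggler."""
    return sum(rpc_latency_ms(fanout) == 100.0 for _ in range(trials)) / trials

f1 = slow_fraction(1)      # ~1% in expectation
f100 = slow_fraction(100)  # ~1 - 0.99**100, i.e. roughly 63% in expectation
```

This is why the hyperscalers chase predictable, consistent latency rather than a low average: under wide fan-out, the tail is effectively the whole distribution.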


RESTful Applications in An Event-Driven Architecture

There are many use cases where REST is just the ideal way to build your applications/microservices. However, increasingly, there is more and more demand for applications to become real-time. If your application is customer-facing, then you know customers are demanding a more responsive, real-time service. You simply cannot afford to not process data in real-time anymore. Batch processing (in many modern cases) will simply not be sufficient. RESTful services, inherently, are polling-based. This means they constantly poll for data as opposed to being event-based where they are executed/triggered based on an event. RESTful services are akin to the kid on a road trip continuously asking you “are we there yet?”, “are we there yet?”, “are we there yet?”, and just when you thought the kid had gotten some sense and would stop bothering you, he asks again “are we there yet?”. Additionally, RESTful services communicate synchronously as opposed to asynchronously. What does that mean? A synchronous call is blocking, which means your application cannot do anything but wait for the response.
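The contrast can be sketched in a few lines: a polling client that keeps asking "are we there yet?" on a timer, versus a consumer that blocks on a queue and is woken only when an event arrives (the store/queue names are illustrative).

```python
import queue
import threading
import time

# Polling: the client repeatedly asks, burning round-trips while nothing changes
def poll_for_result(store, key, interval=0.01, timeout=1.0):
    deadline = time.time() + timeout
    while time.time() < deadline:
        if key in store:
            return store[key]
        time.sleep(interval)          # "are we there yet?" ... ask again
    raise TimeoutError(key)

# Event-driven: the consumer is triggered by the producer's event, no polling
def consume_event(q):
    return q.get()                    # blocks until an event is published

store = {}
q = queue.Queue()

def producer():
    time.sleep(0.05)                  # work happens...
    store["order-42"] = "shipped"     # ...state changes...
    q.put(("order-42", "shipped"))    # ...and is published as an event

threading.Thread(target=producer).start()
polled = poll_for_result(store, "order-42")
event = consume_event(q)
```

The polling path also illustrates the synchronous-blocking problem: the caller can do nothing else while it waits, whereas an event consumer can be one of many handlers reacting as events arrive.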


Application Security Tools Are Not up to the Job of API Security

With the advent of a microservice-based API-centric architecture, it is possible to test each of the individual APIs as they are developed rather than requiring a complete instance of an application — enabling a “shift left” approach allowing early testing of individual components. Because APIs are specified earliest in the SDLC and have a defined contract (via an OpenAPI / Swagger specification) they are ideally suited to a preemptive “shift left” security testing approach — the API specification and underlying implementation can be tested in a developer IDE as a standalone activity. Core to this approach is API-specific test tooling as contextual awareness of the API contract is required. The existing SAST/DAST tools will be largely unsuitable in this application — in the discussion on DAST testing to detect BOLA we identified the inability of the DAST tool to understand the API behavior. By specifying the API behavior with a contract the correct behavior can be enforced and verified enabling a positive security model as opposed to a black list approach such as DAST.
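A minimal sketch of contract-driven, positive-model validation (the contract shape below is a trimmed, OpenAPI-like stand-in, not the real specification format): only fields and types the contract declares are accepted, and anything undeclared is rejected rather than blacklisted.

```python
# A trimmed, OpenAPI-style contract for one endpoint (hypothetical shape)
contract = {
    "/users/{id}": {
        "get": {
            "response": {"id": "integer", "name": "string"},
        }
    }
}

def validate_response(contract, path, method, body):
    """Positive security model: enforce exactly what the contract declares.
    Extra fields, wrong types, and missing fields all fail validation."""
    schema = contract[path][method]["response"]
    type_map = {"integer": int, "string": str}
    for field, value in body.items():
        if field not in schema:
            return False, f"undeclared field: {field}"
        if not isinstance(value, type_map[schema[field]]):
            return False, f"wrong type for field: {field}"
    missing = set(schema) - set(body)
    if missing:
        return False, f"missing fields: {sorted(missing)}"
    return True, "ok"
```

Because the contract exists before any deployed instance, checks like this can run in a developer IDE or pipeline — the "shift left" the excerpt describes — which generic DAST tooling, lacking contract awareness, cannot do.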


Microservice Architecture – Introduction, Challenges & Best Practices

In a microservice architecture, we break down an application into smaller services. Each of these services fulfills a specific purpose or meets a specific business need, for example customer payment management, sending emails, and notifications. In this article, we will discuss the microservice architecture in detail, the benefits of using it, and how to start with it. In simple words, it’s a method of software development where we break down an application into small, independent, and loosely coupled services. Each service has a separate codebase and is developed, deployed, and maintained by a small team of developers. These services are not dependent on each other, so if a team needs to update an existing service, it can do so without rebuilding and redeploying the entire application. These services communicate with each other using well-defined APIs, and the internal implementation of each service is not exposed to the others.
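A toy illustration in Python: one "service" exposing a small, well-defined HTTP API, and a consumer that depends only on that contract (the endpoint and payload are invented). The consumer never sees the service's internals, so either side can be redeployed independently as long as the API holds.

```python
import json
import threading
import urllib.request
from http.server import BaseHTTPRequestHandler, ThreadingHTTPServer

class PaymentService(BaseHTTPRequestHandler):
    """A tiny payment 'service': its only public surface is this HTTP API."""
    def do_GET(self):
        if self.path == "/payments/42":
            body = json.dumps({"id": 42, "status": "paid"}).encode()
            self.send_response(200)
            self.send_header("Content-Type", "application/json")
            self.end_headers()
            self.wfile.write(body)
        else:
            self.send_response(404)
            self.end_headers()

    def log_message(self, *args):  # keep the example quiet
        pass

server = ThreadingHTTPServer(("127.0.0.1", 0), PaymentService)  # port 0 = auto
threading.Thread(target=server.serve_forever, daemon=True).start()

# Another service (here just a client call) relies only on the API contract
url = f"http://127.0.0.1:{server.server_port}/payments/42"
with urllib.request.urlopen(url) as resp:
    payment = json.load(resp)
server.shutdown()
```

In production each service would live in its own repository and deployment pipeline; the sketch only shows the boundary that makes that independence possible.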



Quote for the day:

"Leadership cannot just go along to get along. Leadership must meet the moral challenge of the day." -- Jesse Jackson

Daily Tech Digest - September 17, 2021

How CISOs and CIOs should share cybersecurity ownership

While the CISO is responsible for various elements of cybersecurity day-to-day and forward planning, in most organizations, the buck often stops with the CIO, who reports to the CEO and the board of directors, Finch says. “As a result, the CIO cannot hand responsibility to the CISO entirely. Instead, they need to retain awareness of security strategy and ensure that it isn’t putting the organization’s overall strategy in danger—or vice versa.” Brad Pollard, CIO at Tenable, says today's CIOs have a range of security accountabilities founded in availability, performance, budget, and the timely delivery of projects. “CIOs enable and support every business unit within an organization. In doing so, they inherit the information security requirements for each business unit.” For example, the CISO may well be charged with defining security parameters such as service level agreements for vulnerability remediation or access controls, but it falls to the CIO to deliver on these requirements for all business units, spanning all the company’s technologies, Pollard says. 


A Guide to DataOps: The New Age of Data Management

In a data-driven competitive landscape, ignoring the benefits of data, or even failing to extract its fullest potential, can only mean a disastrous end for organizations. To be sure, many of these organizations are collecting plenty of data. They just lack the desire, the know-how, or the processes to use it. Part of the problem is legacy data pipelines. As data moves from source to target in the data pipeline, each stage has its own idea of what that data means and how it can be put to use. This disconnected view of data renders the data pipelines brittle and resistant to change, in turn making the organizations slow to react in the face of change. ... DataOps, short for data operationalization, is a collaborative data management approach that emphasizes communication, integration, and automation of data pipelines within organizations. Unlike data storage management, DataOps is not primarily concerned about ‘storing’ the data. It’s more concerned about ‘delivery’, i.e., making the data readily available, accessible, and usable for all the stakeholders. 


How To Reduce Context Switching as a Developer

Often, developers struggle to balance timely communication and context switching. As we already know, context switching has a negative impact on your productivity because it prevents you from reaching a deep state of work. On the other hand, when colleagues ask a question, you want to help them promptly. For example, a developer asks for your assistance and might be blocked if you don’t help them. But should you sacrifice your flow state to help your colleague? Well, the answer is somewhat divided. Try to find a balance between responding on time and prioritising your work. Asynchronous communication has become a popular approach to tackle this problem. Instead of calling a meeting for each problem, communicate with the involved people and resolve it via text-based communication such as Slack. Moreover, it would help if you blocked time in your calendar to reach a flow state and leave time slots open for meetings or handling questions from colleagues. For instance, you can block two slots of three hours of deep work and leave two slots of one hour for asynchronous communication.


Stop Using CSVs for Storage — Pickle is an 80 Times Faster Alternative

Storing data in the cloud can cost you a pretty penny. Naturally, you’ll want to stay away from the most widely known data storage format — CSV — and pick something a little lighter. That is, if you don’t care about viewing and editing data files on the fly. ... In Python, you can use the pickle module to serialize objects and save them to a file. You can then deserialize the serialized file to load them back when needed. Pickle has one major advantage over other formats — you can use it to store any Python object. That’s correct, you’re not limited to data. One of the most widely used functionalities is saving machine learning models after the training is complete. That way, you don’t have to retrain the model every time you run the script. I’ve also used Pickle numerous times to store Numpy arrays. It’s a no-brainer solution for setting checkpoints of some sort in your code. Sounds like a perfect storage format? Well, hold your horses.
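A minimal example of the pickle round trip described above (the file name and checkpoint contents are arbitrary): any Python object, not just tabular data, can be serialized and restored.

```python
import os
import pickle
import tempfile

# Any Python object works — here, a model "checkpoint" of mixed types
checkpoint = {"epoch": 3, "weights": [0.1, -0.4, 2.5], "note": "demo"}

path = os.path.join(tempfile.gettempdir(), "checkpoint.pkl")
with open(path, "wb") as f:
    pickle.dump(checkpoint, f)       # serialize to a compact binary file

with open(path, "rb") as f:
    restored = pickle.load(f)        # deserialize back to an equal object
```

One standard caveat worth the author's "hold your horses": only unpickle files you trust, since loading a pickle can execute arbitrary code, and pickles are not readable or editable by hand the way CSVs are.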


Data Management Strategy Is More Strategic than You Think

CXOs are in some ways the most visible representative inside large enterprises of what is, after all, a deeply felt human need to make sense of the world. We try to accomplish this in all parts of our lives including in our professional careers. It’s far more satisfying emotionally to work in an organization that uses data effectively to chart the way forward. But there are some pressing, contemporary drivers of urgency, too, not just an inherent human need. The pandemic radically accelerated awareness of data’s importance for both social and commercial resilience, especially in the face of repeated supply chain shocks and disruptions. But there was another factor too: most of the enterprise world has taken to working from home, operating complex orgs from the relative safety of social isolation, despite the additional challenges such isolation creates. The future of work has become problematized across the enterprise world and that raises questions and urgency around the future of information strategies to support the future of work.


UN Calls For Moratorium On Artificial Intelligence Tech That Threatens Human Rights

The report, which was called for by the UN Human Rights Council, looked at how countries and businesses have often hastily implemented AI technologies without properly evaluating how they work and what impact they will have. The report found that AI systems are used to determine who has access to public services, job recruitment and impact what information people see and can share online, Bachelet said. Faulty AI tools have led to people being unfairly denied social security benefits, while innocent people have been arrested due to flawed facial recognition. "The risk of discrimination linked to AI-driven decisions –- decisions that can change, define or damage human lives –- is all too real," Bachelet said. The report highlighted how AI systems rely on large data sets, with information about people collected, shared, merged and analysed in often opaque ways. The data sets themselves can be faulty, discriminatory or out of date, and thus contribute to rights violations, it warned. For instance, they can erroneously flag an individual as a likely terrorist. The report raised particular concern about the increasing use of AI by law enforcement, including as forecasting tools.


How Do Authentication and Authorization Differ?

As a user, you can usually see authentication happening (although it might be persistent, like staying logged into a website even if you close the browser tab) and you can often do things like changing your password or choosing which second factor you want to use. Users can’t change their authorization options and won’t see authorization happening. But you might see another authentication request if you try to do something that’s considered important enough that your identity has to be verified again before you are authorized to do it. Some banks will let you log in to your account and make payments you’ve done previously with your username and password, but ask you to use 2FA to set up a new payee. Conversely, authentication systems that use conditional access policies can recognize that you’re using the same device, IP address, location and network connection to access the same file share you access from that device, location and network every day to improve your productivity and not make you go through an authentication challenge.
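The step-up pattern in the banking example can be sketched like this (the action names and freshness window are invented): routine actions pass on the existing login, while sensitive ones demand a recent second factor — authentication happening again before authorization is granted.

```python
import time

SENSITIVE_ACTIONS = {"add_payee"}   # actions that require re-verification
STEP_UP_WINDOW = 300                # seconds a second-factor check stays fresh

def authorize(action: str, session: dict) -> str:
    """Decide whether an action proceeds, triggers step-up auth, or
    requires logging in. Policy values here are purely illustrative."""
    if not session.get("authenticated"):
        return "authenticate"
    if action in SENSITIVE_ACTIONS:
        last_2fa = session.get("last_2fa", 0)
        if time.time() - last_2fa > STEP_UP_WINDOW:
            return "step_up_required"   # fresh 2FA before authorization
    return "allowed"
```

A conditional-access system would extend the same decision with device, IP, location, and network signals — skipping the challenge entirely when everything matches the user's daily pattern.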


Data Loss Protection Could Be Industry’s Next Big Trend

Data is an organisation’s most precious asset, and understanding the various data states can help determine the best security measures to deploy. Technology like DLP gives both IT and security personnel an overall perspective on the location, distribution, and use of information within an organisation. It reduces the risk of data theft and its consequences, such as fines and lost income. If you’re worried about your upcoming audit and want to keep your data compliant with complex regulations, DLP is a great option for you. For companies wanting to protect their sensitive data from security breaches caused by increased worker mobility and the development of novel channels, the technology is a godsend. For DLP, success with cloud and virtual models has opened up new possibilities. Using business principles, these software tools identify and protect confidential and sensitive data, preventing unaccredited end-users from disclosing information that could endanger the firm.


Cloud Native Driving Change in Enterprise and Analytics

There is a democratization underway of data embedded into workflows and Slack, he said, but being able to expose data from applications or natively integrated in applications is the province of developers. Tools exist, Stanek said, for developers to make such data analytics more accessible and understandable by users. “We want to help people make decisions,” he said. “We also want to get them data at the right time, with the right context and volume.” Stanek said he sees more developers owning business applications, insights, and intelligence up to the point where end users can make decisions. “This industry is heading away from an isolated industry where business people are copying data into visualization tools and data preparation tools and analytics tools,” he said. “We are moving into a world where we will be providing all of this functionality as a headless functionality.” The rise of headless compute services, which do not have local keyboards, monitors, or other means of input and are controlled over a network, may lead to different composition tools that allow business users to build their own applications with low-code/no-code resources, Stanek said.


Is the Net Promoter Score ripe for replacement?

How can businesses measure the success of their marketing efforts? How does their current and future performance benchmark against competitors? How can they work out, for example, the levels of satisfaction and loyalty felt by their customers? The rise of social media during the last decade has simultaneously made these questions easier and in many ways more difficult to answer. On the one hand, the internet is bristling with all the necessary data required to determine how a given business is performing, as customers willingly – even eagerly – share thoughts and opinions which provide insights into such vital issues as customer satisfaction. On the other hand, the sheer volume of the data available can make it challenging to separate the essential from the non-essential. ... Clearly, then, the stage is set for new ways to measure performance: methods which are up-to-the-minute, capable of leveraging AI and machine learning technology to sift through swathes of data, and able to articulate actionable KPIs in a simple and accessible format.
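For context on what these newer methods would replace: the Net Promoter Score itself is a very simple formula — the percentage of promoters (scores 9–10 on a 0–10 "would you recommend us?" question) minus the percentage of detractors (scores 0–6). A minimal sketch with invented survey responses:

```python
def nps(scores):
    """Net Promoter Score: % promoters (9-10) minus % detractors (0-6),
    computed over 0-10 'would you recommend us?' survey responses."""
    promoters = sum(1 for s in scores if s >= 9)
    detractors = sum(1 for s in scores if s <= 6)
    return round(100 * (promoters - detractors) / len(scores))

# 4 promoters, 3 passives (7-8), 3 detractors out of 10 responses:
print(nps([10, 9, 9, 10, 8, 7, 7, 6, 5, 3]))  # -> 10
```

That simplicity — a single number that discards passives and all nuance — is precisely why critics argue it is ripe for replacement by richer, AI-assisted measures.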



Quote for the day:

"You think you can win on talent alone? Gentlemen, you don't have enough talent to win on talent alone." -- Herb Brooks, Miracle

Daily Tech Digest - September 16, 2021

Zero Trust Requires Cloud Data Security with Integrated Continuous Endpoint Risk Assessment

Most of us are tired of talking about the impact of the pandemic, but it was a watershed event in remote working. Most organizations had to rapidly extend their existing enterprise apps to all their employees, remotely. And since many have already embraced the cloud and had a remote access strategy in place, typically a VPN, they simply extended what they had to all users. CEOs and COOs wanted this to happen quickly and securely, and Zero Trust was the buzzword that most understood as the right way to make this happen. So vendors all started to explain how their widget enabled Zero Trust or at least a part of it. But remember, the idea of Zero Trust was conceived way back in 2014. A lot has changed over the last seven years. Apps and data that have moved to the cloud do not adhere to corporate domain-oriented or file-based access controls. Data is structured differently or unstructured. Communication and collaboration tools have evolved. And the endpoints people use are no longer limited to corporate-issued and managed domain-joined Windows laptops.


What We Can Learn from the Top Cloud Security Breaches

Although spending on cybersecurity grew 10% during 2020, this increase fell far short of accelerated investments in business continuity, workforce productivity and collaboration platforms. Meanwhile, spending on cloud infrastructure services was 33% higher than the previous year, spending on cloud software services was 20% higher, and there was a 17% growth in notebook PC shipments. In short, cybersecurity spending in 2020 did not keep up with the pace of digital transformation, creating even greater gaps in organizations’ ability to effectively address the security challenges introduced by public cloud infrastructure and modern containerized applications: complex environments, fragmented stacks and borderless infrastructure, not to mention the unprecedented speed, agility and scale. See our white paper, Introduction to Cloud Security Blueprint, for a detailed discussion of cloud security challenges, with or without a pandemic. In this blog post, we look at nine of the biggest cloud breaches of 2020, where “big” is not necessarily the number of data records actually compromised but rather the scope of the exposure and potential vulnerability.


When is AI actually AI? Exploring the true definition of artificial intelligence

Whatever the organisation, consumers insist on seeing instant results – with personalisation being ever more important. If this isn’t happening, businesses will start seeing ‘drop off’ as customers seek an alternative, which, in today’s competitive market, could prove disastrous. There is an opportunity now for businesses to combat this by implementing true, bespoke AI models that can sift through vast amounts of data and make their own intelligent decisions. After all, the amount of data being generated across the globe is skyrocketing, and organisations are continuing to share their data with one another – so organisation and analysis at this level is a must. However, it’s important to note that AI isn’t for everyone. The move to AI is a huge leap, so businesses must consider whether they actually need AI to achieve their goals. In some cases, investing in advanced analytics and insights is sufficient to help a business run, grow and create value. So, if advanced analytics does the job, why invest in AI? Most AI projects fail because there is no real adoption after the initial proof of concept.


How DevOps teams are using—and abusing—DORA metrics

DORA stands for DevOps Research and Assessment, an information technology and services firm founded by Gene Kim and Nicole Forsgren. In Accelerate, Nicole, Gene and Jez Humble collected and summarized the outcomes many of us have seen when moving to a continuous flow of value delivery. They also discussed the behaviors and culture that successful organizations use and provided guidance on what to measure and why. ... Related to this is the idea of using DORA metrics to compare delivery performance between teams. Every team has its own context. The product is different with different delivery environments and different problem spaces. You can track team improvement and, if you have a generative culture, show teams how they are improving compared to one another, but stack-ranking teams will have a negative effect on customer and business value. Where the intent of the metrics is to manage performance rather than track the health of the entire system of delivery, the metrics push us down the path toward becoming feature factories.
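For teams tracking system health rather than stack-ranking, three of the four DORA metrics (deployment frequency, lead time for changes, change failure rate) fall directly out of a delivery log. A minimal sketch with invented data:

```python
from datetime import datetime, timedelta

# Toy delivery log: (commit time, deploy time, deploy caused an incident?)
deploys = [
    (datetime(2021, 9, 1, 9),  datetime(2021, 9, 1, 15), False),
    (datetime(2021, 9, 2, 10), datetime(2021, 9, 3, 11), True),
    (datetime(2021, 9, 3, 8),  datetime(2021, 9, 3, 17), False),
    (datetime(2021, 9, 6, 9),  datetime(2021, 9, 7, 9),  False),
]

days = (deploys[-1][1] - deploys[0][1]).days or 1
deploy_frequency = len(deploys) / days                  # deploys per day
lead_times = [d - c for c, d, _ in deploys]
median_lead = sorted(lead_times)[len(lead_times) // 2]  # upper median
change_failure_rate = sum(f for _, _, f in deploys) / len(deploys)

print(f"{deploy_frequency:.2f}/day, lead {median_lead}, CFR {change_failure_rate:.0%}")
```

The numbers are only meaningful within one team's own context, which is exactly the point the article makes about cross-team comparison.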


Intel AI Team Proposes A Novel Machine Learning (ML) Technique, MERL

What is unique about their design is that it allows all learners to contribute to and draw from a single buffer at the same time. Each learner had access to everyone else’s experiences, which aided its own exploration and made it significantly more efficient at its own task. The second group of agents, dubbed actors, was tasked with combining all of the little movements in order to achieve the broader goal of prolonged walking. Since these agents were rarely close enough to register a reward, the team used a genetic algorithm, a technique that simulates biological evolution through natural selection. Genetic algorithms start with possible solutions to a problem and utilize a fitness function to develop the best answer over time. They created a set of actors for each “generation,” each with a unique method for completing the walking job. They then graded them according to their performance, keeping the best and discarding the others. The following generation of actors was the survivors’ “offspring,” inheriting their policies.
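The generational select-and-breed loop described here is the core of any genetic algorithm. A minimal Python sketch — maximizing a toy fitness function as a stand-in for "distance walked", not Intel's actual MERL code:

```python
import random

def fitness(policy):
    # Toy stand-in for "how far did this actor walk?": reward policies
    # whose parameters are close to 0.5 (the optimum is fitness 0).
    return -sum((p - 0.5) ** 2 for p in policy)

random.seed(0)
population = [[random.random() for _ in range(4)] for _ in range(20)]

for generation in range(30):
    # Grade actors by performance, keep the best, discard the others.
    population.sort(key=fitness, reverse=True)
    survivors = population[:5]
    # The next generation is the survivors' "offspring": each inherits a
    # survivor's policy with small random mutations.
    population = [
        [p + random.gauss(0, 0.05) for p in random.choice(survivors)]
        for _ in range(20)
    ]

best = max(population, key=fitness)
print(round(fitness(best), 3))  # close to 0, the optimum
```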


Backend For Frontend Authentication Pattern with Auth0 and ASP.NET Core

The Backend For Frontend (a.k.a. BFF) pattern for authentication emerged to mitigate any risk that may occur from negotiating and handling access tokens from public clients running in a browser. The name also implies that a dedicated backend must be available for performing all the authorization code exchange and handling of the access and refresh tokens. This pattern relies on OpenID Connect, which is an authentication layer that runs on top of OAuth to request and receive identity information about authenticated users. ... Visual Studio ships with three templates for SPAs with an ASP.NET Core backend. As shown in the following picture, those templates are ASP.NET Core with Angular, ASP.NET Core with React.js, and ASP.NET Core with React.js and Redux, which includes all the necessary plumbing for using Redux. ... The authentication middleware parses the JWT access token and converts each attribute in the token into a claim attached to the current user in context. Our policy handler uses the claim associated with the scope for checking that the expected scope is there.
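The token-to-claims step the middleware performs can be illustrated language-agnostically: a JWT payload is just base64url-encoded JSON, and the scope check is a membership test on a space-delimited claim. A Python sketch (signature verification is deliberately omitted here for brevity; real middleware must verify the signature before trusting any claim):

```python
import base64, json

def jwt_claims(token):
    """Decode a JWT payload into claims (signature check omitted here;
    real middleware MUST verify the signature first)."""
    payload = token.split(".")[1]
    payload += "=" * (-len(payload) % 4)   # restore base64 padding
    return json.loads(base64.urlsafe_b64decode(payload))

def has_scope(claims, required):
    """Policy handler: is the required scope in the space-delimited claim?"""
    return required in claims.get("scope", "").split()

# Build a toy unsigned token purely for the demo:
header = base64.urlsafe_b64encode(b'{"alg":"none"}').decode().rstrip("=")
body = base64.urlsafe_b64encode(
    json.dumps({"sub": "user1", "scope": "read:orders write:orders"}).encode()
).decode().rstrip("=")
token = f"{header}.{body}."

claims = jwt_claims(token)
print(has_scope(claims, "read:orders"))    # -> True
print(has_scope(claims, "delete:orders"))  # -> False
```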


REvil/Sodinokibi Ransomware Universal Decryptor Key Is Out

While Bitdefender isn’t able to share details about the key, given the fact that the firm mentioned a “trusted law enforcement partner,” Boguslavskiy conjectured that Bitdefender likely “conducted an advanced operation on REvil’s core servers and infrastructures with or for European law enforcement and was somehow able to reconstruct or obtain the master key.” Using the key in a decryptor will unlock any victim, he said, “unless REvil redesigned their entire malware set.” But even if the reborn REvil did redesign the original malware set, the key will still be able to unlock victims that were attacked prior to July 13, Boguslavskiy said. Advanced Intel monitors the top actors across all underground discussions, including on XSS, a Russian-language forum created to share knowledge about exploits, vulnerabilities, malware and network penetration. So far, the intelligence firm hasn’t spotted any substantive discussion about the universal key on these underground forums. Boguslavskiy did note, however, that the administrator of XSS has been trying to shut down discussion threads, since they “don’t see any use in the gossip.”


What to expect from SASE certifications

Secure access service edge (SASE) is a network architecture that rolls SD-WAN and security into a single, centrally managed cloud service that promises simplified WAN deployment, improved security, and better performance. According to Gartner, SASE’s benefits are transformational because it can speed deployment time for new users, locations, applications and devices as well as reduce attack surfaces and shorten remediation times by as much as 95%. ... The level one certification has twelve sections, and it takes about a day to complete. Level two has five stages, takes about half a day, and requires that applicants first complete level one. The training and testing are delivered on the Credly platform. “It integrates with LinkedIn, so it’s automatically shared on your LinkedIn profile,” Webber-Zvik says. As of Sept. 1, more than 1,000 people have earned level one certification, and they represent multiple levels of professional experience and job categories. Half are current Cato customers, and some of the rest may be considering going with Cato, says Dave Greenfield, Cato’s director of technology evangelism.


The difference between physical and behavioural biometrics, and which you should be using

The debate around digital identity has never been more important. The COVID-19 pandemic pushed us almost entirely online, with many businesses pivoting to become e-tailers almost overnight. Our reliance on online services – whether ordering a new bank card, getting your groceries delivered, or talking to friends – has given bad actors the perfect hunting ground. With the advent of the internet, the world moved online. However, authentication processes from the physical world were digitised rather than re-designed for the digital world. The processes businesses digitised lack security, are cumbersome and don’t preserve privacy. For example, the password: it is now 60 years old, yet still relied on today to protect our identities and data. Digitised processes have enabled the rise in online fraud, scams, social engineering, and synthetic identities. Our own research highlighted how a quarter of consumers globally receive more scam text messages than they get from friends and families, with over half (54%) of UK consumers stating that they trust organisations less after receiving a scam message.


Resetting a Struggling Scrum Team Using Sprint 0

It is hard to determine in Sprint 0 if you are done. There is a balance to strike between performing enough upfront planning and agreement to provide clarity and comfort, and taking significant time away from delivery to plan for every eventuality that could appear in the sprints that follow Sprint 0. After running these sessions, we entered our first delivery sprint in the hopes that the agreed ways of working would help us eliminate any challenges we found together. However, we encountered a few rocks that we had to navigate around on our path to quieter seas. One early issue that surfaced was that of the level of bonding within the team. Despite the new team members settling in well, and communication channels being agreed upon to help Robin and the others collaborate, it became clear that the developer group needed to build trust to work effectively. Silence was a big part of many planning and refinement ceremonies. This was not a team of strong extroverts, and I had concerns that the team was not comfortable speaking up.



Quote for the day:

"Leadership is the art of influencing people to execute your strategic thinking" -- Nabil Khalil Basma

Daily Tech Digest - September 15, 2021

Understanding the journey of breached customer data

It’s known that hackers often use the name of the breached organisation when marketing, selling or leaking their stolen data. So, it’s worth deploying a system that monitors for supplier names, as well as your own, on forums and ransomware sites. This includes searching for common typos and variants of these names. There are, however, some limitations to this method, as these searches could lead to lots of false positives. Security teams need to filter through the data to find matches, but this can take time. Businesses can use database identifiers to improve monitoring efficiency. These take the form of unique strings within databases, such as server names and IP addresses. Teams can then match metadata included in a data leak when searching through database dumps. Patterns within data, including account numbers, customer IDs and reference numbers, are also useful for identification. Another technique is ‘watermarking’ data by adding synthetic identities to a data set. Unique identifiers are used in your data sets or those you share in your digital supply chain so you can confirm if a breach includes data from your business or a supplier.
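The watermarking technique can be sketched simply: plant a distinct synthetic "canary" record in each data set you share, then scan any leaked dump for those markers to attribute the breach. The names and identifiers below are invented for illustration:

```python
# Watermarking sketch: one synthetic identity per shared data set.
# If a leaked dump contains a canary, you know which copy leaked.
CANARIES = {
    "supplier_a": {"email": "jane.canary+a@example.com", "account": "ACx91"},
    "supplier_b": {"email": "jane.canary+b@example.com", "account": "ACx92"},
}

def attribute_leak(dump_text):
    """Return which shared data sets this leaked dump appears to contain."""
    return [
        name for name, canary in CANARIES.items()
        if any(marker in dump_text for marker in canary.values())
    ]

leak = "...,jane.canary+b@example.com,ACx92,..."
print(attribute_leak(leak))  # -> ['supplier_b']
```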


Top 12 Cloud Security Best Practices for 2021

In a private data center, the enterprise is solely responsible for all security issues. But in the public cloud, things are much more complicated. While the buck ultimately stops with the cloud customer, the cloud provider assumes the responsibility for some aspects of IT security. Cloud and security professionals call this a shared responsibility model. Leading IaaS and platform as a service (PaaS) vendors like Amazon Web Services (AWS) and Microsoft Azure provide documentation to their customers so all parties understand where specific responsibilities lie according to different types of deployment. The diagram below, for example, shows that application-level controls are Microsoft’s responsibility with software as a service (SaaS) models, but it is the customer’s responsibility in IaaS deployments. For PaaS models, Microsoft and its customers share the responsibility. ... To prevent hackers from getting their hands on access credentials for cloud computing tools, organizations should train all workers on how to spot cybersecurity threats and how to respond to them.
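The split described for application-level controls can be expressed as a simple lookup. This is a simplified illustration of the idea, not Microsoft's official responsibility matrix:

```python
# Illustrative shared-responsibility lookup for application-level
# controls, following the SaaS/PaaS/IaaS split described above.
RESPONSIBILITY = {
    ("application_controls", "saas"): "provider",
    ("application_controls", "paas"): "shared",
    ("application_controls", "iaas"): "customer",
}

def who_owns(control, model):
    """Default to 'customer' -- the safe assumption when in doubt."""
    return RESPONSIBILITY.get((control, model.lower()), "customer")

print(who_owns("application_controls", "SaaS"))  # -> provider
print(who_owns("application_controls", "IaaS"))  # -> customer
```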


How to Deploy Disruptive Technologies with Minimal Disruption

A disruptive technology can have a particularly hard impact on end users. “Discuss change, and the human reaction to it, as part of your educational process, acknowledging that it’s hard and everyone at every level of the organization must go through it,” says Tammie Pinkston, director of organizational change management at technology research and advisory firm ISG. “We recently held a client training [program] where individuals used a sticker to show where they were on the change curve, mapping themselves each day with indicators so we could see movement.” If a disruptive technology will impact multiple departments, all parties should be involved in the rollout process. “One of the reasons it's important to assess all the different interactions and impacts is to bring in the right expertise and oversight,” Lightman says. This may, for instance, require seeking input and support from HR and security teams. “It's better to be overly cautious than to have an issue arise later when you didn't include representation from a department,” he notes. Still, despite best efforts, it remains possible to overlook some technology stakeholders.


Update on .NET Multi-platform App UI (.NET MAUI)

.NET Multi-platform App UI (.NET MAUI) makes it possible to build native client apps for Windows, macOS, iOS, and Android with a single codebase and provides the native container and controls for Blazor hybrid scenarios. .NET MAUI is a wrapper framework and development experience in Visual Studio that abstracts native UI frameworks already available – WinUI for Windows, Mac Catalyst for macOS/iPadOS, iOS, and Android. Although it’s not another native UI framework, there is still a significant amount of work to provide optimal development and runtime experiences across these devices. The .NET team has been working hard with the community in the open on its development and we are committed to its release. Unfortunately, .NET MAUI will not be ready for production with .NET 6 GA in November. We want to provide the best experience, performance, and quality on day 1 to our users and to do that, we need to slip the schedule. We are now targeting early Q2 of 2022 for .NET MAUI GA. In the meantime, we will continue to enhance Xamarin and recommend it for building production mobile apps and continue releasing monthly previews of .NET MAUI.


8 top cloud security certifications

As companies move more and more of their infrastructure to the cloud, they're forced to shift their approach to security. The security controls you need to put in place for a cloud-based infrastructure are different from those for a traditional datacenter. There are also threats specific to a cloud environment. A mistake could put your data at risk. It's no surprise that hiring managers are looking for candidates who can demonstrate their cloud security know-how—and a number of companies and organizations have come up with certifications to help candidates set themselves apart. As in many other areas of IT, these certs can help give your career a boost. "Cloud security certifications can set professionals up for long-term career success in designing, operating, and maintaining secure cloud environments for today’s enterprises," says Joe Vadakkan, senior director of services alliances at Optiv. "In addition to the process being a fun learning experience, each certification offers a unique benefit to understanding the security controls, associated risks, and dynamic needs of cloud operating models."


Juniper enables Mist to handle network-fabric management

Juniper Networks is embracing an open campus-fabric management technology supported by other major networking vendors and, at the same time, making it simpler to use by removing much of the manual work it can require. The company is adding Ethernet VPN-Virtual Extensible LAN (EVPN-VXLAN) support to its Mist AI cloud-based management platform to let customers streamline network operations. EVPN-VXLAN separates the underlying physical network from the virtual overlay network, offering integrated Layer 2/Layer 3 connectivity as well as programmability, automation and network segmentation, among other features. The open technology is offered in a variety of forms by most networking vendors, including Cisco, Arista, Aruba and others. “Many of today’s campus networks leverage proprietary technologies and complicated L2/L3 architectures that weren’t designed to meet modern requirements,” wrote Jeff Aaron, vice president of Enterprise Marketing at Juniper, in a blog about the announcement.


Dow CIO: Digital transformation demands rethinking talent strategy

When it comes to investing in digital, companies have many choices. There is a lot you could do, but you need to focus on what you should do. One thing is certain: You should invest in your people if you want to be successful with your digital transformation. This is not just about the technology, but using technology to change the way employees work. My IT organization continually develops its tech skills with curricula on a variety of topics, including cloud computing, machine learning, and the entire data space from architecture to data storage and data visualization. We’re also refreshing our skills around threat identification, user experience design, and expanding our programming skills by learning different programming languages. But IT organizations also need to grow their soft skills. This includes improving employees’ business acumen, so they understand how their company works and how it makes money. This not only helps organizations identify opportunities but connects them to how the tools being implemented help drive value.

Ballerina has unique features that make it particularly worthwhile for smaller programs. Most other scripting languages that are designed for smaller programs have significant differences from Ballerina in that they are dynamically typed and they don't have the unique scalability and robustness features that Ballerina has. Problems in the pre-cloud era that you could solve with other scripting languages are still relevant problems. Except now, network services are involved; robustness is now more important than ever. With standard scripting languages, a 50-line program tends to become an unmaintainable 1000-line program a few years later, and this doesn’t scale. Ballerina can be used to solve problems addressed with scripting language programs but it's much more scalable, more robust, and more suitable for the cloud. Scripting languages also typically don't have any visual components, but Ballerina does.


Tech Nation welcomes tech companies to Net Zero 2.0 programme

For the first time, the Net Zero programme from Tech Nation is welcoming space tech companies, as operations in the sector gain momentum. Satellite imaging, for example, provides a way to observe large areas from space to rapidly identify illegal activities such as deforestation or mining; monitor supply chains; and verify nature-based solutions such as carbon offsetting. This type of technology is gaining traction rapidly as countries across the world look for innovative ways to combat climate change and as multinationals seek to achieve their recently set net zero goals. Earth Blox is using satellite data to identify deforestation or mining activities, monitor supply chains and support nature-based solutions, while Sylvera uses machine learning and satellite data to verify the carbon offsetting industry. Additionally, Satellite Vu looks to measure the thermal footprint of any building on the planet every 1-2 hours, helping to drastically increase the energy efficiency of buildings, factories and power stations globally.


Travis CI Flaw Exposed Secrets From Public Repositories

The effects of the vulnerability meant that if a public repository was forked, someone could file a pull request and then get access to the secrets attached to the original public repository, according to Travis CI's explanation. Travis CI's documentation says that secrets shouldn't be available to external pull requests, says Patrick Dwyer, an Australian software developer who works with the Open Web Application Security Project, known as OWASP. "They [Travis CI] must have introduced a bug and made those secrets available," Dwyer says. Travis CI's flaw represents a supply-chain risk for software developers and any organization using software from projects that use Travis CI, says Geoffrey Huntley, an Australian software and DevOps engineer. "For a CI provider, leaking secrets is up there with leaking the source code as one of the worst things you never want to do," Huntley says. Travis CI has issued a security bulletin, but some have criticized it as insufficient given the gravity of the vulnerability.



Quote for the day:

"Leaders must be close enough to relate to others, but far enough ahead to motivate them." -- John C. Maxwell

Daily Tech Digest - September 14, 2021

Honing Cybersecurity Strategy When Everyone’s a Target for Ransomware

While not all hackers are out for the money, if they are, they become particularly crafty at plying their trade. What malicious actors are often looking for are the “keys to the kingdom” — the most lucrative mission-critical information, passwords, contacts or accounts — which is usually found within the C-suite. And not only do C-suite targets have the most valuable organizational data, but they are also the decision-makers of whether to pay a ransom. This creates two situations that put executives under even greater threat. First, it makes a ransomware attack on a C-suite decision maker incredibly efficient, which achieves maximum ROI for threat actors. Second, it makes a C-suite executive’s personal communications incredibly valuable and particularly vulnerable. The tighter cybercriminals can twist the screws with embarrassing business and private communications threatened for release, the greater their chances for payment – and often, the more they can demand. The sad reality is that the majority of executives, and particularly their direct reports, are incredibly soft targets.


What Do Engineers Really Think About Technical Debt?

It's no surprise that technical debt causes bugs, outages, quality issues and slows down the development process. But the impact of tech debt is far greater than that. Employee morale is one of the most difficult things to manage, especially now that companies are switching to long-term remote work solutions. Many engineers mentioned that technical debt is actually a major driver of decreasing morale. They often feel like they are forced to prioritize new features over vital maintenance work that could improve their experience and velocity, and this is taking a significant toll. ... More than half of respondents claim that their companies do not deal with technical debt well, highlighting that the divide between engineers and leadership is widening rather than closing. Engineers are clearly convinced that technical debt is the primary reason for productivity losses, yet they struggle to make it a priority. Making the case could pay off: as many as 66% of engineers believe their team would ship up to 100% faster if it had a process for dealing with technical debt.


Human-Machine Understanding: how tech helps us to be more human

Human-Machine Understanding, or HMU, is one of the lines of enquiry currently getting me out of bed in the morning, and I’m sure that it will shape a new age of empathic technology. In the not-too-distant future, we’ll be creating machines that comprehend us, humans, at a psychological level. They’ll infer our internal states – emotions, attention, personality, health and so on – to help us make useful decisions. But let’s just press pause on the future for a moment, and track how far we’ve come. Back in 2015, media headlines were screaming about the coming dystopia/utopia of artificial intelligence. On one hand, we were all doomed: humans faced the peril of extinction from robots or were at least at risk of having their jobs snatched away by machine learning bots. On the other hand, many people – me included – were looking forward to a future where machines answered their every need. We grasped the fact that intelligent automation is all about augmenting human endeavour, not replacing it.

Essential Soft Skills for IT leaders in a Remote World

People in positions of authority often aim to project unbreakable confidence, but a better path to building connections is through honesty. Above all, being open about insecurities, uncertainties, and failures is humanizing—a critical trait in the age of Zoom. Conversely, ultra-strict managers may find their teammates become reticent to speak up about risks they see. Such an environment is anathema to multidisciplinary IT work, given the need for transparent workflows. Being vulnerable at work is not only about showing something to your teammates; it is also about establishing and growing a safe environment for the colleagues you work with. In my experience, it's hard for people to speak up about sensitive topics like challenges, difficult conversations, or disagreements with someone at work. But these conversations become much easier when the team, including leadership, has built an environment where everyone trusts that they are free to express their opinions and share their feelings about their work.

The past, present and future of IoT in physical security

As ever, the amount of storage that higher-resolution video generates is the limiting factor, and the development of smart storage technologies such as Zipstream has helped tremendously in recent years. We will likely see further improvements in smart storage and video compression that will help make higher-resolution video possible. Cybersecurity will also be a growing concern for both manufacturers and end users. Recently, one of Sweden’s largest retailers was shut down for a week because of a hack, and others will meet the same fate if they continue to use poorly secured devices. Any piece of software can contain a bug, but only developers and manufacturers committed to identifying and fixing these potential vulnerabilities can be considered reliable partners. Governments across the globe will likely pass new regulations mandating cybersecurity improvements, with California’s recent IoT protection law serving as an early indicator of what the industry can expect. Finally, ethical behavior will continue to become more important. A growing number of companies have begun foregrounding their ethics policies, issuing guidelines for how they expect technology like facial recognition to be used — not abused.


Leading under pressure

“There is a well-accepted and common wisdom that success breeds confidence, and that confidence helps you handle pressure better,” explained Jensen. “My read, without having talked to Simone Biles or knowing exactly what is going on in her head, is that there is a countervailing force to that positive cycle, which is that as you accrue status and visibility, the ‘importance’ piece gets greatly magnified. The stakes expand. They begin to encompass your self-worth and the weight of the 330 million people you are carrying along for the ride.” Business leaders are subject to this phenomenon, too. As they reach higher levels of the corporate hierarchy, the importance of their decisions and actions grows, and the stakes rise. And like pressure itself, the element of importance is a double-edged sword. ... How do you manage importance during these peak pressure moments? The secret is to understand that how you perceive the stakes in any given situation can be controlled. “When you get into peak pressure moments, all you can think about is how important [the stakes are], what you might gain, what you might lose,” said Jensen.


IT leaders facing backlash from remote workers over cybersecurity measures: HP study

Ian Pratt, global head of security for personal systems at HP, said the fact that workers are actively circumventing security should be a worry for any CISO. "This is how breaches can be born," Pratt said. "If security is too cumbersome and weighs people down, then people will find a way around it. Instead, security should fit as much as possible into existing working patterns and flows with unobtrusive, secure-by-design and user-intuitive technology. Ultimately, we need to make it as easy to work securely as it is to work insecurely, and we can do this by building security into systems from the ground up." IT leaders have had to take certain measures to deal with recalcitrant remote workers, including updating security policies and restricting access to certain websites and applications. But these practices are causing resentment among workers, 37% of whom say the policies are "often too restrictive." The survey of IT leaders found that 90% have received pushback because of security controls, and 67% said they get weekly complaints about it.


OSI Layer 1: The soft underbelly of cybersecurity

The metadata from a switch can indicate whether a rogue device is present. This can be accomplished without mirroring traffic, respecting privacy within sensitive IT environments. Supply chain exposure is more complex than managing where you order from: it's a two-fold problem involving both software and hardware. It's understood that many applications bundle libraries and controls from third parties that are further outside your purview. Attackers exploit weaknesses and defects across an array of targets, including unsecured source code, outdated network protocols (downgrade attacks), unsecured third-party servers, and update mechanisms. Safeguarding software is under your control: deploying least-privilege principles, endpoint protection, and due diligence to audit and assess third-party partners are essential and reasonable precautions. Hardware is another story altogether. It's less obvious when a fully functioning Raspberry Pi has been modified or telecommunications equipment has been compromised by a state actor, as it looks and plays the part without any irregularities.
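The switch-metadata approach described above can be sketched in a few lines: compare the MAC addresses a switch has learned against an allowlist of known devices, flagging anything unfamiliar without ever mirroring traffic. This is a minimal, hypothetical illustration; the table format, the `KNOWN_DEVICES` allowlist, and the plain-text export are assumptions, not any specific vendor's API (real deployments would typically pull this via SNMP or a switch CLI).

```python
# Hypothetical sketch: detect rogue devices from switch metadata alone.
# The MAC-table text format and the allowlist contents are illustrative
# assumptions; no packet payloads are inspected.

KNOWN_DEVICES = {
    "00:1a:2b:3c:4d:5e",  # office printer (assumed known asset)
    "00:1a:2b:3c:4d:5f",  # badge reader (assumed known asset)
}

def parse_mac_table(raw: str) -> list[tuple[str, str]]:
    """Parse lines of '<mac> <port>' into (mac, port) pairs."""
    entries = []
    for line in raw.strip().splitlines():
        mac, port = line.split()
        entries.append((mac.lower(), port))
    return entries

def find_rogues(entries: list[tuple[str, str]]) -> list[tuple[str, str]]:
    """Return entries whose MAC address is not on the allowlist."""
    return [(mac, port) for mac, port in entries if mac not in KNOWN_DEVICES]

# Example export from a switch's learned-address table (hypothetical).
raw_table = """
00:1A:2B:3C:4D:5E Gi0/1
DE:AD:BE:EF:00:01 Gi0/7
"""

for mac, port in find_rogues(parse_mac_table(raw_table)):
    print(f"Unrecognized device {mac} on port {port}")
```

In practice MAC addresses can be spoofed, so an allowlist check like this is a first signal, not proof of compromise; it narrows attention to ports worth physically inspecting.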


Desensitized To Devastation: Strategies For Reaching CISOs In Today’s Cyber Landscape

Hackers only need to be right once. One set of compromised credentials puts them on their way to snatching your critical assets. Security teams, on the other hand, have to be right all the time. There's no logging off at the end of the 9-to-5 workday for criminals. They're active when you're awake, they're active when you're asleep and they're active when you're celebrating the holidays with your families. All it takes is one right guess of a password and a company could lose millions of dollars, customer data, its reputation and its stock price — and the CISO could lose their job. Businesses can't afford to have weak security infrastructures that aren't monitoring for and shutting down threats 24/7. ... Ransomware was up 93% in 2021 from 2020, according to Check Point, and the country has recently been hit with major cyberattacks that have massive implications for daily life and business, like the Colonial Pipeline and Kaseya attacks. And external threats aren't all we have to worry about. 


Bad News: Innovative REvil Ransomware Operation Is Back

Unfortunately, with its infrastructure coming back online, REvil appears to be back. Notably, all victims listed on its data leak site have had their countdown timers reset, Bleeping Computer reports. Such timers give victims a specified period of time to begin negotiating a ransom payment, before REvil says it reserves the right to dump their stolen data online. REvil is one of a number of ransomware operations that regularly tell victims they've stolen sensitive data, before forcibly encrypting systems and threatening to leak the data if victims don't pay. But REvil's representatives have been caught lying before, claiming to have stolen data as they extorted victims into paying, only to admit later that they never stole anything. Why might the infrastructure have come back online, including the payments portal, which accepts bitcoin and monero? Numerous experts have suggested REvil was just lying low in the wake of the Biden administration pledging to get tough. Perhaps the main operators and developers opted to relocate to a country from which it might be safer to run their business. Or maybe they were just taking a vacation.



Quote for the day:

"You have two choices, to control your mind or to let your mind control you." -- Paulo Coelho